When we talk about pervasive computing, we're usually talking about mobile devices like cell phones or, if we're being really exotic, the various sorts of wearable gizmos that get made fun of in Dilbert cartoons. But I look at pervasive from the other end of the pipe. Hence, The Pervasive Datacenter, the name of the blog that kicks off with this post. From my point of view, it's the datacenter, the software that it runs, and its connections that are everywhere just as much as the peripherals out at the end of the network.
This blog will have its home base in the datacenter itself and will cover topics from servers big and small, to multi-core processors, to operating systems, to virtualization, to power and cooling concerns. However, it will also look at the software and the services out in the network cloud that are consuming datacenter computing cycles and storage and thereby determining the future of the back-end. I'll also spend some time on the bigger questions: Is Software as a Service the next big thing or merely Application Service Providers warmed over? What's the future of Open Source in a Web-delivered software model? Do operating systems even matter any longer?
And, because my premise is that the pervasive datacenter touches everything, I'll feel free to, now and then, head out to the very edge of the network. I'll try to stay clear of overly trendy and self-referential debates, but will write about important trends in client devices from UltraMobile PCs to cameras and the services that run on them.
We often talk about silos in IT. The storyline usually goes something like this. The server guys (computer gear) don't talk to the storage guys (SANs and Fibre Channel) don't talk to the network gals (all that Ethernet and other comms stuff). It's all true enough, of course. But notice something? Facilities doesn't even tend to get mentioned when bemoaning IT silos. All that HVAC and power gear is just part of the landscape. IT folks didn't need to know about bricks. Why should they need to know about power and cooling? Maybe a little UPS here and there, but the big stuff is Someone Else's Problem.
I suspect that part of the issue is language. Back before IBM did its full-court press to make the System z mainframe cool (and relevant) again, its presentations and documentation were clearly intended only for the priesthood. Whether talking CECs or DASD, FICON or CICS, or arcane pricing models, the effect (intended or not) was to hang a "No Trespassing" sign outside the mainframe tree house. When IBM began modernizing System z for new workloads and uses, one of the many challenges it faced (and still faces to a more limited degree) was to make the mainframe not just appealing, but even intelligible, to outsiders. The task was made no easier by the fact that so many of the people involved in the effort had spent their entire careers working with the mainframe in its many incarnations. Basic assumptions about the very nature of the mainframe were so deeply-held that it took real effort to externalize them in a comprehensible and meaningful way. (This presentation isn't from IBM but illustrates just how foreign-sounding deep mainframe discussions can be.)
I think we're going to see something similar happen with power and cooling. P&C are becoming an important part of the datacenter agenda. Yes, we're in a bit of an overheated hype curve about the whole topic but that doesn't mean it's not important. As a result, companies like Liebert--long-time makers of computer room power gear--are starting to show up at IT tradeshows and brief IT analysts.
I had one such briefing recently from Liebert that included much interesting material including the Liebert NX "Capacity on Demand" UPS and forward-looking discussion about datacenter power distribution. But, based on my own experience around computer systems design, I think that Liebert and other P&C vendors should understand that even electrical engineers who design servers don't know much more about analog electrical systems than the average homeowner--and probably less than the typical electrician.
HVAC vocabulary can be arcane, and truly in-depth discussions of redundant facilities power even more so. (For example, by Liebert's count, high availability power configurations can come in five different bus configurations, each of which is ideal for a specific type of environment.) There's a certain inherent complexity in these matters of course. However, that doesn't change the reality that if IT managers are going to be increasingly involved with power and cooling decisions and configurations, the companies selling that gear are going to have to speak the right language.
Last month I wrote a research note about some of the changes going on with the desktop PC. We're seeing more variety and experimentation with client devices than we've ever seen. Handhelds grab most of the headlines. (And some of the nascent trends around "Ultra-Mobile PCs" and "Mobile Internet Devices" are genuinely worthy of attention.) However, there's action on the desktop too. My research note delves into the background behind these trends in considerable depth but, in a nutshell, people are starting to wonder: "If most of my computing is out in the network cloud anyway, why is it that I need a big, noisy, hard-to-manage desktop PC?"
Dan Lyons over at Forbes.com reports on one of the latest desktop PC alternatives, from the Menlo Park-based Zonbu. It's a small box powered by a Via x86-compatible processor with 512MB of DRAM and 4GB of flash for storage. It runs a custom Linux distribution that comes packaged with Firefox, Skype, Open Office, Peer-to-Peer clients and lots of multimedia applications and games. The unit doesn't have any fans, something that leads the company to loudly trumpet its eco-friendliness--a laudable goal certainly, if one that's in danger of getting more than a bit overexposed these days.
With only a modicum of local storage, most user data will be stored out in the network. Zonbu has cut a deal with Amazon to use its S3 service. For $12.95 per month, you get up to 25GB of storage and free upgrades to newer versions of the operating system and applications; for $19.95, you get 100GB. A basic tier with 2GB of storage and software upgrades is free. The device itself is $249--but you can get $50 off for pre-paying for one year and $150 off for pre-paying for two. Put another way, for $371 you get the device and 25GB of storage for two years. You add your own keyboard, mouse, and monitor. Wireless requires a WiFi "dongle" connected to one of the USB ports (the unit comes with a standard 10/100Mbps wired Ethernet port).
I don't see this as a replacement for the main PC in most households--unless that PC really does just get used to check email, write the occasional letter, and download pictures. At the least, you'd need to accept that the device support (cameras, printers, etc.) is going to be skimpier than a Windows PC--although Linux has gotten much better in this regard and Zonbu appears to have put a great deal of work into documenting what devices do work. Furthermore, it's intended to just run the fixed set of delivered software although, presumably, the technically savvy could add applications or otherwise make changes to the base package.
However, this looks very interesting as a supplementary PC for children, for the kitchen, or for a second house. The biggest issue with having multiple PCs in the home isn't really the cost of the additional PCs; boxes are pretty cheap these days. Rather, it's keeping them all updated, backed-up, and virus-free. Nor do you especially want whirring fans in the same room where you're trying to watch TV. Viewed in that context, this looks very interesting. I wouldn't mind trying one myself.
Dell 1.0 was a religious company. I suppose you could describe it instead as merely an intense focus on low costs in all aspects of its operations, but it really went deeper than that. Low cost was an article of faith, the deep guiding principle underlying essentially everything that the company did. Dell didn't merely tilt toward a streamlined supply chain and lean R&D; those were a fundamental part of what it was as a company.
This is not a pedantic distinction. Focus can be adjusted and tweaked; it's that much harder to change your core. Yet that's what Dell had to do. It had to respond to a world where "cheap boxes" was no longer the guiding mantra for server buyers, which made Michael Dell's public pronouncements suggesting that "Dell 2.0" was mostly about better execution so wrongheaded. I wrote about this back in February 2007 in a piece that also includes some choice commentary from Peter Capelli in Knowledge@Wharton:
So in this case, for example, Dell was the darling of many people in the business world because they had this model that seemed to work just incredibly well, and lots of people were copying it, and then the environment changed. It's not that they got bad at executing their model. At least I don't think that's the complaint. It is that the environment changed. They got different competitors who came in with different ideas and the playing field changed.
This is what makes the continuing set of moves Dell has been making so significant. It's one thing to "be open" to new strategies, partnerships, and approaches. It's another to actually act on them.
Perhaps the first major sign that real change was abrewin' was Dell's belated decision to introduce AMD server processors into its lineup alongside Intel. Although Intel has since gotten (seriously) back into the fight, at the time AMD had the clear technological lead and Dell's long refusal to offer AMD-based products seemed a willful decision to cede a pile of business to competitors without a fight. Backroom politics (however significant) aside, part of Dell's rationale was almost certainly a desire to avoid the incremental costs associated with designing, manufacturing and supporting servers based on processors from two different suppliers.
Second were the signs of genuine technical innovation in a company whose intellectual property was far more about business processes and supply chain optimization than the product itself. Dell won't be the only vendor offering servers with an embedded hypervisor that lets customers configure virtual machines out of the box without installing additional software. But it was involved early on with this technology approach under the name "Project Hybrid." Although Dell isn't, and won't be, an R&D powerhouse, it's clearly no longer content to always sit on the sidelines while others roll out the initial iteration of some new technology or approach.
Finally, we have Dell's retail push. It started with a rather limited offering through Sam's Club and other outlets, and it has since expanded to additional retailers. Thus, in yet another aspect of its business, Dell has apparently decided that a pure approach that takes minimal cost as its sole guiding principle--in this case Web-direct distribution--may have to be modified a bit if revenue is at stake.
None of this is to suggest that Dell has abandoned the Church of Frugality. Don't expect to see a Dell Labs that focuses on fundamental research or a major move into highly bespoke "Big Iron" servers. But we are seeing a Dell that is showing some flexibility on what were once all-or-nothing principles.
Today, "peer to peer" is inextricably linked to a variety of techniques for P2P file-sharing, whereby the recipients of a large file supply chunks of data to other recipients.
This distributes the load compared with everyone downloading a file from some central server. For this and other reasons, P2P networks have proven popular for sharing MP3 music files, although they're suitable for distributing any sizable digital content; for example, one also sees P2P employed to distribute Linux distributions, which can run into the gigabytes.
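The load-shifting benefit is easy to see with a toy simulation (this is an idealized sketch, not a real protocol like BitTorrent): once the central seed has sent each chunk out once, peers can propagate it among themselves.

```python
# Toy illustration: compare how many chunks a central seed must upload when
# peers share with each other vs. when every peer downloads everything from
# the server directly.

def server_only(num_peers, num_chunks):
    # Classic client-server: the server uploads every chunk to every peer.
    return num_peers * num_chunks

def swarm(num_peers, num_chunks):
    # Idealized swarm: the seed uploads each chunk once; after that, any
    # peer holding a chunk can pass it along to a peer that lacks it.
    have = [set() for _ in range(num_peers)]
    seed_uploads = 0
    while any(len(h) < num_chunks for h in have):   # until everyone has all
        for chunk in range(num_chunks):
            holders = [h for h in have if chunk in h]
            needers = [h for h in have if chunk not in h]
            if not needers:
                continue
            if not holders:
                needers[0].add(chunk)   # the very first copy comes from the seed
                seed_uploads += 1
            else:
                # each current holder serves one needer per round
                for _, n in zip(holders, needers):
                    n.add(chunk)
    return seed_uploads

print(server_only(100, 50))  # 5000 chunk uploads from the server
print(swarm(100, 50))        # 50 -- the seed sends each chunk just once
```

The peer-to-peer copies still happen, of course, but the bandwidth cost is spread across the swarm instead of concentrated at one site.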
However, a few weeks ago I attended MIT Technology Review's EmTech07 Emerging Technologies Conference and attended a session where I was reminded that another "P2P" was once the subject of great buzz.
At the Fall 2000 Intel Developer Forum, Intel CEO Craig Barrett called peer-to-peer computing a "new wave which is going to have material impact on our industry." And he wasn't talking about file sharing.
Pat Gelsinger, who was Intel's CTO at the time, was even more enthusiastic in his keynote:
My subject for today is peer-to-peer--what we think is possibly the next computing frontier. Our agenda, we'll suggest, and hopefully by the end you'll agree with us, (is) that this is the revolution that could change computing as we know it.
P2P computing, as the term was popularized, was based on a pair of simple concepts: 1) There were lots of PCs sitting out there on desks doing nothing most of the time. (Laptops were far less ubiquitous in Y2K than today.) And 2) certain types of computing jobs could be broken down into a lot of small, distinct chunks. These generally fell into the realm of what's often called high-performance computing--tasks like looking at the different ways molecular structures interact or fold.
Given those two facts, why not bring together the idle hardware and the computational need?
That's exactly what P2P computing did. There were a few efforts to use the technology for enterprise applications. Intel itself used P2P to power some of its chip design simulations. However, what really captured the public imagination was using distributed PCs in the homes of consumers or business desktops for causes like AIDS or other scientific research. The typical approach was to load the P2P application as a screen saver; when the computer was idle, it would start cranking the calculations, shipping them off to a central site as they completed.
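The coordinator/work-unit pattern described above can be sketched in a few lines. Here a process pool stands in for the volunteer machines, and summing squares stands in for a real scientific workload; the structure (split, farm out, merge) is the point, not the job itself.

```python
# A minimal sketch of the P2P-computing work-unit pattern: a coordinator
# splits a big job into independent chunks, "volunteer" workers each crank
# one chunk, and the results are shipped back and merged.

from concurrent.futures import ProcessPoolExecutor

def make_work_units(n, chunk_size):
    # Break the job [0, n) into independent ranges, one per work unit.
    return [range(i, min(i + chunk_size, n)) for i in range(0, n, chunk_size)]

def crunch(unit):
    # What each idle PC would run -- e.g., inside a screen saver.
    return sum(x * x for x in unit)

if __name__ == "__main__":
    units = make_work_units(10_000, 1_000)
    with ProcessPoolExecutor() as pool:      # stand-in for the volunteer pool
        partials = list(pool.map(crunch, units))
    total = sum(partials)                    # the coordinator merges results
    print(total == sum(x * x for x in range(10_000)))
```

The key property is that the chunks share no state, so workers can come and go (just as a screen-saver client appears and disappears) without coordinating with one another.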
SETI@home was perhaps the canonical example. But there were many others, such as United Devices, Entropia and Blackstone Computing.
At a February 2001 O'Reilly Conference on P2P Computing, there were 900 attendees. At the same conference, Larry Cheng of Battery Ventures estimated that there were more than 150 companies in P2P. There was even talk of monetizing the distributed computation like some form of electrical grid.
P2P computing never wholly went away; SETI@home remains an active project. Univa UD (formed by the merger of Univa and United Devices) has had some success in pharma and finance (although it's less client-centric than United Devices' original vision).
But P2P, at least in the sense of harvesting excess client compute cycles, never amounted to something truly important, much less a revolution. There were security concerns and worries about the applications slowing PCs or hurting their reliability. One person was even prosecuted for running a P2P application on college computers. And, as much as anything, the whole thing just faded from being the cool flavor of the month.
Aspects of P2P computing live on. The basic concept that many computing jobs could be best handled by distributing them across large numbers of standardized building blocks was valid. In fact, it's the most common architecture for running all manner of large-scale applications today, from genomics to business intelligence. "Grid computing," a broad if variously defined set of technologies for harnessing and managing large compute clusters, shares common roots with P2P. Indeed, The Grid by Foster and Kesselman was a bible of sorts for P2P computing.
But, as with so many other aspects of computation, the cycles are moving back to the data center. Perhaps we could summarize today's approach as being less about harvesting excess capacity on the periphery than about not putting it out there in the first place.
The initial broad adoption of the Internet was, in major respects, about breaking down the boundaries of place and space. Important aspects of Web 2.0 concern themselves with reintroducing the local into the global. When I attended Mashup Camp at MIT earlier this year, I was struck by how much of the interest was around merging data with maps.
Thus, it's not particularly surprising that geotagging, associating photos with a map location, is a current hot topic. At the recent Web 2.0 Summit, Flickr debuted an upcoming revamp of its map page and a new "places" feature. (See screenshots and more here.) A couple of weeks ago I conducted my own geotagging experiment to see if I could merge GPS data with photos that I took during a hike (conclusion: yes, but you have to be a bit of a geek).
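The geeky part of my experiment boiled down to a nearest-timestamp match: a GPS receiver logs timestamped fixes, each photo carries a capture time in its EXIF data, and you tag each photo with the closest fix in time. A sketch (the track and times below are made-up illustration data):

```python
# Match photos to GPS fixes by capture time. Assumes the camera clock and
# GPS clock are in sync (in practice you usually have to correct an offset).

from bisect import bisect_left

def geotag(track, photo_times):
    """track: time-sorted list of (unix_time, lat, lon); photo_times: capture times."""
    times = [t for t, _, _ in track]
    tagged = []
    for pt in photo_times:
        i = bisect_left(times, pt)
        # consider the fixes on either side, pick the closer in time
        candidates = [j for j in (i - 1, i) if 0 <= j < len(track)]
        j = min(candidates, key=lambda j: abs(times[j] - pt))
        tagged.append((pt, track[j][1], track[j][2]))
    return tagged

track = [(1000, 42.10, -71.50), (1060, 42.11, -71.51), (1120, 42.12, -71.52)]
print(geotag(track, [1005, 1115]))   # snaps to the 1000 and 1120 fixes
```

That's the whole trick; the "bit of a geek" part is mostly wrangling clock offsets and EXIF formats.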
At the risk of stating the obvious, all photos are taken somewhere. Some, such as studio portraits, don't have location as a central characteristic. However, for many photos, location is key. And for some, such as pictures of real estate, location is arguably the defining characteristic.
Consequently, I expect that we're going to see hardware that makes it easier to record GPS information and integrate it with photographs. And a corresponding evolution of photosharing sites to simplify the storage and display of that geotagged data. This is good but it also carries some risks.
Now I'm not a tinfoil hat sort of guy.
There's a lot of information available about me through Google. You could probably even find out where I live without straining yourself terribly. None of this especially concerns me. But geotagging represents an explicit link between the virtual and the physical world. That's what makes it interesting--but also a bit worrying.
To be sure, we'll always have the ability to choose when and where we expose geotagged data. But that won't necessarily be simple.
For one thing, as geotagged data becomes more ubiquitous (and more of our lives go online in some form or another), more "leakage" is inevitable. You forget to set a privacy filter correctly. You don't know how to set a privacy filter. You didn't realize that the data had geospatial information.
And that assumes that you have control. What if someone else takes photos at your party that embed GPS data and uploads them to the public area of Flickr? (In an amusing twist, Flickr co-founder Stewart Butterfield reportedly asked people attending a party at his house recently not to geotag any photos they took.)
I can think of various features one could implement on a site like Flickr to mitigate the issue. But none are perfect and, in any case, that's only one site. Nor do I think a glib "privacy is dead" is a proper response either. Think of it as yet another to-do and to-think-about in the complicated merger of our private and professional, virtual and physical lives.
A couple of weeks back, Amazon.com announced an expansion of its Elastic Compute Cloud (EC2) service. The still-in-beta EC2 is a twist on the much-discussed, if rarely seen in the wild, compute utility whereby customers rent computing by virtual machine (VM)-hour; Amazon's EC2 infrastructure is based on a Xen hypervisor structure rather than running directly on physical hardware.
One implication of Amazon using VMs is that they can easily offer a variety of different VM sizes up to the size of the physical hardware. That was the most recent change announced. In addition to the default "Small Instance," users can now get "Large Instances" or "Extra Large Instances." These might be useful if, for example, you need to pair a heavyweight database instance with some lightweight Web services.
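The rent-by-the-hour model makes the economics easy to reason about. The per-instance-hour rates below are roughly what Amazon published around the announcement (about $0.10, $0.40, and $0.80 for small, large, and extra-large); treat the specific figures as an illustrative snapshot, not current pricing.

```python
# Back-of-envelope EC2 economics: cost scales with actual usage, sized per VM.

RATE = {"small": 0.10, "large": 0.40, "xlarge": 0.80}   # $ per instance-hour

def monthly_cost(instances):
    """instances: list of (size, hours_used_this_month) tuples."""
    return sum(RATE[size] * hours for size, hours in instances)

# The pairing mentioned above: one heavyweight database VM running 24x7
# (~720 hours/month) plus two small Web-service VMs used business hours only.
fleet = [("xlarge", 720), ("small", 200), ("small", 200)]
print(f"${monthly_cost(fleet):.2f}")   # $616.00
```

The point isn't the dollar figure; it's that mixed-size fleets can be dialed up and down hour by hour, which a purchased server simply can't do.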
Another implication is that VM images, called Amazon Machine Images (AMI) in this case, can be archived and transported. This is analogous to VMware's virtual appliances. Amazon itself hasn't done much to jump-start an image marketplace at this point as VMware has. However, it does provide a mechanism for customers to post and publicly share AMIs and sees the opportunity for people to offer paid AMIs over time.
I bring this up because Emre Sokullu over at Read/Write Web has a post and table that does a great job of crystallizing why getting into Web services is such a big deal for Amazon. In short, Amazon's revenue is comparable to Google's. The difference is that, while Google is operating at a 29 percent profit margin, Amazon is under 2 percent--which is probably about the best one can hope for with a big "mail order" retail operation.
Some may be wondering why Amazon is de-focusing and entering into something that is far from its DNA as an e-commerce service. To respond to that question, take a look at the table below, which compares some financial data of Internet bigcos:
[Table: company, net profit margin (%), 2006 annual revenue ($M), and market capitalization ($B) for the major Internet companies]
I tend to use "Web services" to describe Amazon's offering, in part because Amazon also has a variety of pricing and other e-commerce products that fit more squarely into the "services" camp. However, another way to describe it is Hardware as a Service (HaaS), a term that seems to have been coined by Ed Byrne in 2006. Terminology aside, I agree with Ed that:
I think it will evolve into a H+SaaS [Hardware + Software as a Service] model where bundled solutions will be offered rather than just empty-shell machines. There's a business opportunity here for software companies to package and license their applications in the H+SaaS model, and charge on a per-user/per-domain basis.
We're already seeing this to a degree with Amazon's complementary S3 Storage as a Service model. For example, Jungle Disk offers data backup using Amazon's S3 as the backend.
To my mind, there's little question that more and more storage and computing will move out into the cloud over time. The question--well, one of them anyway--is where the economic scale points lie. In other words, will most software vendors find that it makes sense for them to deliver their own software as a service on their own hardware (i.e., the Salesforce.com model), or will they effectively subcontract out the datacenter infrastructure stuff to the likes of Amazon?
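One way to frame that question is as a fixed-vs-marginal-cost trade-off. The numbers in this sketch are hypothetical placeholders (not quotes from Amazon or anyone else): running your own datacenter means a large fixed cost plus a small marginal cost per customer, while renting infrastructure means little fixed cost but a higher marginal cost.

```python
# Toy break-even model for "build your own SaaS infrastructure vs. rent it."
# All dollar figures are invented for illustration.

def own_cost(customers, fixed=500_000, per_customer=2.0):
    return fixed + per_customer * customers       # staff, facilities, hardware

def rent_cost(customers, fixed=10_000, per_customer=7.0):
    return fixed + per_customer * customers       # pay-as-you-go hosting

def break_even(own_fixed=500_000, own_var=2.0, rent_fixed=10_000, rent_var=7.0):
    # own_fixed + own_var*n == rent_fixed + rent_var*n  ->  solve for n
    return (own_fixed - rent_fixed) / (rent_var - own_var)

print(break_even())   # 98000.0 -- below this many customers, renting wins
```

If the break-even point sits above what most software vendors will ever reach, subcontracting the datacenter becomes the default; if cloud marginal prices keep falling, the point moves higher still.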
The answer to that particular question has broad implications for datacenter and system design. An IT world in which we have a small number of mega-datacenters (as Sun's Greg Papadopoulos has postulated) would be strikingly different from a world in which more software is delivered over the network but from a much larger number of sites more similar in scale to today.
Facebook banned someone for using a pseudonym and he's upset.
Anonymous speech has a long history in the United States going back to at least the Federalist Papers. And there are many good reasons, in addition to well-established case law, why anonymous speech should be protected.
That said, very little of such speech on the Internet falls into "Allowing dissenters to shield their identities frees them to express critical, minority views." (U.S. Supreme Court McIntyre v. Ohio Elections Commission, 1995). Instead, anonymity on the Internet often seems far more about protecting rudeness than protecting political dissent. Thus, I have little problem with a service such as Facebook attempting to ensure that its members are using real identities. (See this post by Dennis Howlett for a largely dissenting view.)
This case does, however, raise a variety of points about identity, privacy, and closed social platforms that are worth considering given that we'll see these issues and others like them again and again.
First, there's the question of "What is your identity?" The straightforward, if somewhat glib, response is that it's the name in your passport--i.e. your legal name. That seems to be Facebook's position. But what of people who write under a pseudonym? Or, more broadly, people who have chosen, for whatever reason, to consistently adopt a different identity or persona for their private and their public lives. Or for different aspects of their public lives.
This is all highly relevant whether we're discussing the need for separate personal and professional networks or even what constitutes an appropriate avatar when using virtual worlds for business purposes. It's not so much about absolute anonymity as such (and therefore the ability to say or do things without consequence) as having mechanisms to have multiple, consistent identities that allow one to wall off parts of one's life from each other.
A point perhaps difficult for some in the radical-transparency high-tech crowd on one of the coasts to appreciate is that not everyone is comfortable with throwing most everything in their personal and business lives together. (Expect these sorts of discussions to gain urgency as the Facebooked and MySpaced generation increasingly enters the world of business.)
Another aspect of this case is the whole question of walled gardens and data portability. Establishing a dependence on some company's product is nothing particularly new. Almost uncountable dollars and hours have gone into training, developing applications, and purchasing software for Microsoft Windows. And there are many other, if less extreme, examples. (Indeed, this dynamic underlies much of the ideological basis for open source.)
However, in the Web 2.0 world, we're seeing more and more of our data going into the hands of a third party as well. And, in the case of a service like Facebook, it's not just data in the sense of files or text but an entire web of connections and interactions that have evolved in an essentially emergent way. Issues such as these were no small part of the discussion at the O'Reilly Open Source Conference (OSCON) last summer.
Google's OpenSocial API is one reaction to the current lack of social data portability, but the problem isn't an easy one. Whereas traditional data portability is fairly straightforward (documented file formats, etc.), what it even means to have a portable social network isn't especially clear.
One of the reasons that questions such as these have some importance is that network effects--Metcalfe's Law if you would--tend to drive things towards a smaller number of bigger players. Although there's some natural partitioning (social networks for children, for example), the evidence suggests that one or two big networks in a given domain tend to win dramatically. Check out the traffic stats for Flickr vs. Zooomr. Thus it's not as simple as picking up your ball and heading over to the next field.
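Metcalfe's Law makes the winner-take-most dynamic concrete: if a network's value grows roughly with the number of possible connections, n(n-1)/2, then a network ten times the size isn't ten times as valuable--it's closer to a hundred times.

```python
# Metcalfe's Law in miniature: value proportional to possible pairwise links.

def metcalfe_value(n):
    return n * (n - 1) // 2    # number of possible pairwise connections

small, big = 1_000_000, 10_000_000
print(metcalfe_value(big) / metcalfe_value(small))   # roughly 100x
```

Whether value really scales quadratically is debated, but even a weaker super-linear relationship is enough to make the second-place network a much less attractive place to be.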
Even if you could pick up your ball.
A couple of weeks ago, I was in Las Vegas for the Citrix iForum show. Citrix is best known for its Presentation Server product, née MetaFrame. Presentation Server delivers specific business applications to remote desktops using Windows Terminal Server on the back-end. It's usually thought of in terms of thin client computing; in fact, the vast majority of Presentation Server installations deliver applications to ordinary PCs. (I describe the technology in more depth in this Illuminata research note.) However, these days, Citrix has many other products as well, variously tailored to delivering applications and full desktop images to a variety of clients.
I've been seeing more interest among IT folks in alternatives to traditional desktops over the past year than, well, ever. Traditional SMS-style provisioning and management systems never truly performed up to hopeful expectations, and increasing concerns about security have only exacerbated an already sub-par situation. Nor are users thrilled with the current state of affairs. Their PCs tend to accumulate "cruft" (that's the technical term) over time and software loads "blow up" (another technical term) periodically. Furthermore, IT policies intended to keep things under some vague semblance of control tend to consist, in no small part, of long lists of "Thou shall nots" that limit what users can do with corporate PCs.
And, before the various fanboys chirp in, switching to Linux or a Mac doesn't make all these issues magically go away.
Products from Citrix and others (such as VMware's ACE) offer a variety of alternatives to a forced choice between a locked-down corporate desktop and an environment where anything goes. Largely orthogonal to these approaches from a technical perspective, but conceptually related, are rich internet applications (RIAs) that run within essentially any endpoint device that has a browser. Such applications underpin Software as a Service (SaaS), in which data and software exist largely in the "cloud" rather than in a user's PC or mobile client.
We've seen and heard a lot of praise for the democratic impulse associated with this particular phase of computing that often goes by the Web 2.0 moniker. Anyone can post. Anyone can publish. Anyone can photograph. Your vote matters in social media. And alternative ways of accessing and running applications have indeed made it easier to do things outside of a strict IT framework. In his closing iForum keynote Citrix CEO Mark Templeton used the phrase "Making the personal computer personal again" for this idea.
There's truth in this characterization, but the situation is far more complicated than distributed vs. centralized computing. In some respects, access is indeed more distributed--not only in the alternatives to tightly-controlled corporate desktops, but also to the myriad mobile devices that are woven more and more deeply into both personal and professional lives.
At the same time, the "cloud" is a new element and a new form of centralization. PCs (and, for that matter, Unix in the early days) were, for many, about distributing and maintaining control over data as well as access and computation. The applications that are increasingly central to the lives of many people today are much different. Data is centralized, not distributed, and often flows in but one direction: in. The real software intelligence is increasingly centralized as well. Delving deeper into those topics is a matter for another day. Suffice it to say that, while there's much to be said for widespread personal access, let's not confuse it with truly personal computing.
Perhaps it was in observance of Halloween, but whatever the reason there was something a bit ghostly about Intel's October 31 announcement of its latest Itanium processor.
You had to peer hard to catch even a glimpse of the Intel Itanium Processor 9100 announcement--formerly known under the "Montvale" code name. Neither Intel nor HP (which sells something like 90 percent of the Itaniums that go out Intel's doors) held briefings on the new processor iteration, and even simple press releases dribbled out belatedly. It's the sort of treatment usually reserved for announcements of new sales offices or CEO speeches at obscure conferences. I suppose that they could have made the announcement on a Saturday if they wanted to be even more wraithlike--but this was pretty close.
To be sure, this was a fairly modest bump. Montvale barely edges its "Montecito" predecessor in frequency (1.66GHz vs. 1.6GHz, or about 4 percent). More important is the 667MHz front-side bus (FSB), which gives about 25 percent faster memory access. Reliability ("core-level lock-step") and power efficiency ("demand-based switching") tweaks round out the new features. Bigger changes await the future quad-core "Tukwila," due late 2008 or so; it will also sport an integrated memory controller and new serial interconnect.
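For the record, the arithmetic behind those percentages is simple enough (I'm assuming, as I believe was the case, a 533MHz front-side bus on the prior Montecito parts):

```python
# Sanity-checking the Montvale bump: clock and front-side-bus improvements.

old_clock, new_clock = 1.60, 1.66    # GHz, Montecito vs. Montvale
old_fsb, new_fsb = 533, 667          # MHz front-side bus

clock_gain = new_clock / old_clock - 1   # about 4 percent
fsb_gain = new_fsb / old_fsb - 1         # about 25 percent

print(f"clock bump: {100 * clock_gain:.0f}%")
print(f"FSB bump:   {100 * fsb_gain:.0f}%")
```

Which is to say: a speeds-and-feeds refresh, not a new generation--hence the wait for Tukwila.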
One almost gets the sense that Intel and HP hoped that if they soft-pedaled this announcement, no one would notice and therefore, the usual suspects wouldn't revel in the opportunity to engage in Itanium-mocking. Well, that didn't work.
Hacking Netflix ponders whether the "Death of Blockbuster" stories are greatly exaggerated.
I hardly think we've seen the last of Blockbuster, but they do have a tough road ahead of them. Blockbuster Chairman Jim Keyes is just getting started, and he might have saved the company by pulling out of the expensive online war with Netflix. With Movie Gallery out of the way, refocusing on stores and getting more revenue (from) their 20 million monthly customers makes sense in the short term. Keep in mind that it's going to be a while before DVD goes away (and my Dad watches a movie online).
This latest round of the Blockbuster deathwatch was largely kicked off by Blockbuster's Q3 earnings Webcast, during which it was revealed that the company had lost about 500,000 Total Access (DVD by mail) subscribers. CEO James Keyes suggested that some were unprofitable subscribers, but then you'd probably expect him to say that. In any case, Blockbuster appears to be pulling back (but likely not exiting) from its mail operation to concentrate on its brick and mortar stores.
One often hears about B&M being dead or the DVD being replaced by online downloads. I don't buy either assertion, at least for any reasonable planning horizon. The reason is in the table below.
What the table shows is that the three styles of rental have distinct characteristics that inherently appeal to different groups of consumers or a given consumer in different circumstances.
If you just have to watch Spiderman 3 tonight, Netflix isn't going to cut it. On the other hand, downloading movies today requires a certain degree of tech savvy-ness and the appropriate hardware in your house--which may or may not be connected to your television set. So, there's something to be said for going down to the store for an impulse rental.
On the other hand, if you're mostly content to watch one of the movies that you happen to have on hand, as I am, disks by mail have a lot of nice characteristics--including, for now, probably the best selection for most purposes.
In the medium to longer term, however, I do believe that the relative cost to deliver movies in different ways is going to drive home movie viewing more and more online. Although there are certainly (large) start-up costs to delivering movies over broadband, the infrastructure will get better and costs will come down over time.
This cost difference seems particularly relevant in something like movie rentals because all our experience to date suggests that, whatever the cost to deliver rentals, consumers are willing to pay about the same amount per movie. (Although there are certainly people who use the Netflix flat fee to rent large numbers of movies at a low per-movie fee, most people probably end up paying about the same $3 to $4 per film that they'd pay at their local rental store.)
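The arithmetic behind that claim is simple to sketch. Assuming a hypothetical $16.99 monthly three-out plan (an illustrative figure, not a quoted price), the effective per-movie cost depends entirely on how many discs a subscriber actually turns over:

```python
# Effective per-movie cost on a flat-fee subscription plan.
# The $16.99 plan price is an illustrative assumption.

def per_movie_cost(monthly_fee, movies_per_month):
    """Effective cost per rental under a flat monthly fee."""
    return monthly_fee / movies_per_month

fee = 16.99
for movies in (4, 5, 12):
    print(f"{movies:2d} movies/month -> ${per_movie_cost(fee, movies):.2f} per movie")
```

At four or five discs a month, the effective rate lands right in that $3-to-$4 band; only heavy renters drive it much lower.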
Thus, the issue isn't so much whether a lot of folks would prefer to continue to have a B&M rental option (they would), but whether they're going to be willing to pay the costs. Especially as movie downloads start to chip away at the increasingly technically sophisticated user base that wants things right now.
That's Blockbuster's longer-term problem.
The broad strokes of Red Hat's announcement yesterday left a lot of canvas unpainted. Its JBoss middleware was MIA. And a great deal of management, provisioning, identity, and similar capabilities--essentially the services that span the entire infrastructure--were casually lumped under the Red Hat Network (RHN) umbrella, or handed off to open APIs, without much in the way of detail. RHN is an update and monitoring tool that has become increasingly capable over time. But RHN, even augmented by Red Hat's other infrastructure products, hardly comprises a complete enterprise automation strategy, contrary to what the company seemed to suggest. Overall, it seemed more like a conceptual vision for a strategy than an actual strategy.
For me, more interesting for the near- to medium-term were a pair of other announcements that are more closely related than they might initially appear. One was the Red Hat Appliance Operating System (AOS) that the company plans to make available in the first half of 2008. (The acronym takes me back to my previous life...but that's another story.)
It goes almost without saying these days that the appliances in question are virtual ones. The idea is that you can take an app, the operating system it runs on, supporting programs, libraries, and what have you; configure the whole mess properly; and then write it out to disk ready to be fired up as a self-contained, ready-to-run virtual machine. Although the early use cases for virtual appliances were mostly around trials and demos, we're starting to see more and more interest in them as a general-purpose way of deploying software. (I previously discussed the evolution of virtual appliances in this piece.)
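To make the idea concrete, here's a minimal sketch of what the guest definition for such an appliance might look like in Xen's Python-syntax domU config format (every name and path here is my own illustration, not anything from Red Hat's announcement):

```python
# A minimal Xen domU config sketch for a hypothetical virtual appliance.
# Xen guest config files use Python syntax; all values are illustrative.

name       = "crm-appliance"                # hypothetical appliance name
memory     = 512                            # MB allocated to the guest
vcpus      = 1
disk       = ['file:/var/lib/xen/images/crm-appliance.img,xvda,w']
vif        = ['bridge=xenbr0']              # attach to the host's network bridge
bootloader = '/usr/bin/pygrub'              # boot the kernel inside the image
```

The point is that the disk image already contains the configured operating system, application, and libraries; the config merely tells the hypervisor how to fire it up.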
The company wasn't especially specific about exactly how AOS would differ from standard Red Hat Enterprise Linux, except to say that it was optimized for running on virtual infrastructures and would come with a software development kit (SDK) for the construction of appliances and their integration with third-party software. Presumably Red Hat will leverage its existing Red Hat Exchange as part of the way these appliances would be distributed, but there are no details on that yet. The company did say that there would be tools in place to help ISVs update their own software in an appliance, but it wasn't ready to make any specific announcements about that yet.
VMware has run an aggressive play on virtual appliances. rPath has built an entire business around appliances. Perhaps an even more significant player is Oracle. Oracle Unbreakable Linux isn't an appliance as such. But it is an attempt to subsume the operating system with the application. With AOS--which Red Hat says will maintain all the software certifications associated with its Enterprise Linux product--the company is effectively arguing that the OS does matter, even in an appliance. Which, for an operating system vendor, is certainly a preferred state of affairs.
Another important announcement concerned making Red Hat Enterprise Linux available on Amazon's Elastic Compute Cloud (EC2) utility, which is currently in beta. At first blush, this would seem to be largely orthogonal to the appliance announcement. In fact, they have a lot in common. EC2 runs on a Xen-based virtual infrastructure; its virtual machines can be stored as Amazon Machine Images (AMIs). Although Amazon hasn't yet done much around creating any sort of formal marketplace for AMIs (a la Red Hat Exchange), that wouldn't be a big leap. And I expect that we're going to see far more use of Amazon's style of utility computing to deliver software services rather than raw hardware. Most users want to do things rather than run stuff.
One way to do this is a pure Software as a Service (SaaS) model, whereby some vendor out in the cloud someplace may be using Amazon to host some storage or deliver some Web services, but this is mostly transparent to the user. However, it's also easy to imagine applications that are better delivered in a more traditional way (i.e., running on an operating system image that the user "owns"). In this case, virtual appliances offer one potential way to get those applications up and running in a way that mimics what we're used to doing on a physical server, but with many of the fast-setup characteristics of SaaS.
Hewlett-Packard has never done as much as it could to use its servers, PCs, printers, software, and the like to cross-leverage and complement each other.
One need only look to Apple to see how this sort of thing can work. The iPod would arguably not have succeeded without the Mac home base to build from, and the Mac has clearly piggybacked on the iPod's success. With even more assets, such as servers and services, HP had still more opportunities. But it largely paid lip service to connecting them. Indeed, at present, HP seems to be headed back to a decentralized organization reminiscent of former CEO Lew Platt's tenure, rather than the more centralized, top-down structure it adopted under Carly Fiorina.
However, at least outside its strictly business-oriented Technology and Solutions Group (where ProLiant and Integrity servers live, alongside HP's software and services businesses), there has been some cross-fertilization and synergy. HP combined its Imaging and Printing Group (cameras, printers, scanners) with the Personal Systems Group (PCs) in 2005. Although HP clearly favored the printing side of the equation, it also had products like cameras, scanners, and tablets that covered multiple points of digitization, from image creation to hard-copy output.
Now comes the announcement that HP is exiting the camera business. Among the reasons given is enabling "HP to accelerate its investment in Print 2.0 initiatives," according to the company statement.
My initial reaction was that HP had become a bit too enamored of the margins associated with ink. And, as a result, it was backing away from products and technologies that are not, in themselves, as lucrative as printing but that clearly cross-support and leverage it in the same manner as the Mac and the iPod.
Print 2.0 relates, in no small part, to the mass Web 2.0 digitization of content. But HP sometimes seems too anxious to skip over anything that doesn't involve printing something out right now. For example, HP was actually fairly early to the online photo storage thing with Cartogra (now called Snapfish). But it was largely usurped by the more social-oriented sites such as Flickr. The difference can be striking; Snapfish periodically sends me e-mails threatening to delete my account unless I get something printed soon. Flickr is now augmenting its own printing services and can leverage a user base that dwarfs that of Snapfish.
To be sure, HP profits from many online services. HP Indigo printers are the output device of choice for many of the online book publishers such as Blurb. But by essentially taking on the role of arms merchant, rather than something more customer-facing, it cedes a lot of visibility and control of its destiny.
That said, it's hard to argue with HP's exit from the camera business.
For one thing, it largely reflects current reality. HP is already outsourcing much of its camera design work. Past digital camera-related R&D in HP Labs and its product groups notwithstanding, HP was already largely out of the camera business. Maybe HP coulda', woulda', shoulda' done better by its early digicam development, but it didn't--and there's not a lot of point wishing things were different.
Cameras are also a special class of device with their own long history and well-entrenched suppliers. Canon, for example, has been in the photo business since 1933 and has managed to not just maintain a presence in the camera market, but to actually accelerate its relative stature as a camera maker in the Digital Age.
Nikon hasn’t done badly either, although its greatest strengths are arguably in more traditional camera technologies such as optical design, whereas Canon has a clear lead in electronics design and manufacturing. Other manufacturers, such as Sony, Olympus, and Pentax are also in a better position than HP.
In short, HP is in such a laggardly position when it comes to cameras that it has effectively no hope of coming close to market leadership. Better to fold the tent and perhaps seek partnerships with companies that might be more amenable to partnering with HP than they would be if it were an aggressive competitor.
There's a nasty little war afoot over the future of the operating system.
In one corner you have the operating system vendors.
They're building in virtualization, for example. This increases the depth of their software stack. The OS vendors present virtualization as a natural addition to existing operating system functions and a means to integrate an increasingly common software capability.
That's fair enough. But it's also about control, especially in a world where owning the hypervisor gives you an advantage when up-selling to management layers and other value-add software in which there's real money to be made (as opposed to the raw hypervisor, which is becoming increasingly commoditized).
As Red Hat's recent appliance announcement showed, OS vendors are on the lookout for attempts to make their operating systems (and their brands) irrelevant. In Red Hat's case, the goal was to quash the efforts of software appliance makers to effectively make the OS just a supporting feature of the application.
In another corner, you have the application vendors and their fellow travelers.
Software as a service (SaaS) is one aspect of this war. Taken to its logical extreme, it may change the role of systems companies as well as operating system vendors. However, we don't need to look that far into possible futures to see the application vendor front in this war.
Take the appliance makers that Red Hat was taking on last week. rPath CEO Billy Marshall writes: "Fortunately for all of us, 'certification' will be a thing of the past when applications companies distribute their applications as virtual appliances." It's not hard to see why Red Hat doesn't exactly cotton to this way of thinking. After all, certification is a very large part of what Red Hat sells. And the number of applications certified to run on Red Hat comprises a huge barrier to any other Linux vendor delivering its own flavor of "Enterprise Linux."
Oracle's Unbreakable Linux is a different take from a different angle, but the end result is the same. Its concept is based on the idea that, when you buy an application from Oracle, you also get some bits that let the application sit on top of the hardware and perform necessary tasks like talking to disk. Oracle has been subsuming operating system functions like memory and storage management for years; subsuming the whole operating system was just the next logical step.
So is its latest move: coming out with its own hypervisor based on technology from the widely used Xen project. (Xen is also the basis for the hypervisor in Novell and Red Hat Linux--as well as OS-independent products from XenSource/Citrix and Virtual Iron.)
Just as Oracle wants to minimize the role of the OS, so too does it want to minimize the role of the hypervisor (which, as I noted, itself threatens to reduce the role of the OS--got all that?). From the vantage of Redwood Shores, VMware is getting altogether too much attention. The easiest way to minimize the impact of the virtualization players? Offer Oracle's own hypervisor.
The biggest challenge that I see facing Oracle here is similar to the one facing Unbreakable Linux and software appliances in general. There's an implicit assumption that people will be willing to run one virtualization stack on the boxes that run Oracle and another for everything else--in other words, that the maker of the hypervisor bits doesn't matter.
So far, there's scant evidence that people are willing to be quite so blasé about their server virtualization. Furthermore, brand preferences aside, it remains early days for standards that handle the control and movement of virtual machines across virtual infrastructures sourced from different vendors. It's perhaps more thinkable that Oracle database and application servers might be kept independent from a general virtual infrastructure than would be the case with other, often less business-critical, applications. But, at least today, it still runs counter to the overall trend of IT shops looking at server virtualization in strategic rather than machine-by-machine tactical ways.
As a result, I don't see this announcement having a broad near-term impact (as, indeed, Unbreakable Linux did not either, once the original raft of press stories and industry discussion died down). Rather, I see this as Oracle determined to keep making its statement, time and time again, that, someday, the operating system won't matter. That's Larry's story, and he's sticking with it.
This is a busy week--what with SC2007 in Reno, Oracle OpenWorld in San Francisco, and Microsoft TechEd EMEA in Barcelona. And that means lots of news crossing my desk.
One of today's most interesting tidbits came from Microsoft. Bob Kelly, corporate vice president for the company's server and tools business, announced Hyper-V:
This is the official name of the server virtualization technology within Windows Server 2008 that was previously code-named "Viridian." Microsoft also announced Hyper-V Server, a standalone hypervisor-based server virtualization product that complements the Hyper-V technology in Windows Server 2008 and allows customers to virtualize workloads onto a single physical server.
"So what?!" you say. Everybody and their dog is coming out with hypervisors that can be either purchased as standalone products or embedded into servers. Besides, Microsoft is very late to the virtualization game; its hypervisor won't even be in the initial release of Windows Server 2008.
That may all be so, but Microsoft has a huge footprint in datacenters--and even more in the IT installations of smaller companies. Thus, however tardy and reluctant Microsoft's arrival to virtualization may be (Virtual Server notwithstanding), its plans and presence matter.
That makes Microsoft's decision to offer a hypervisor that's not part of the operating system striking, given that the company has been the most vocal proponent of the "virtualization as a feature of the OS" point of view. As Jim Allchin, who headed Microsoft's Platforms and Services Division until the beginning of this year, put it: Windows already "virtualizes the CPU to give processing." In this sense, VMs just take that virtualization to the next level. And, in fact, there's a long history of operating systems subsuming functions and capabilities that were once commonly purchased as separate products. Think file systems, networking stacks, and thread libraries.
Built-in-ness is clearly the big argument in favor of marrying server virtualization to the operating system. You're buying the operating system anyway, so there's no need to buy a separate product from a third party.
Of course, Microsoft wants to keep the operating system relevant to users however much Oracle and others would like to subsume it. Thus it's hardly a surprise that Microsoft wants functions in the OS both to control them and to enhance the value of its most strategic product.
But sometimes the world doesn't work the way you'd like it to.
Separate hypervisors are a better match for the sort of heterogeneous environments typically found in enterprises than are those built into OSs.
There's also a major trend afoot to embed hypervisors into x86 servers, just as they are already embedded into Big Iron. Among the early system vendors to announce or preview intentions in this area were Dell, HP, and IBM. Embedded hypervisors pretty much trump any integration advantage that virtualization-in-the-OS enjoys. You can't get much more built-in than firing virtualization up when you turn the server on for the first time.
I expect that this style of delivering the foundation of server virtualization is going to become commonplace.
It will be a while before who wrote a particular hypervisor becomes a genuine "don't care" to most users (the way BIOSs are today). Standards for managing and controlling virtual machines are still nascent, and the whole area is far too new for true commoditization. But that's the direction things are headed. Even Microsoft, however reluctantly, has now accepted it, even as it tries to keep as much control over its own destiny as possible.
Those of us who have actually read through many of the Open Source licenses and have spent a fair bit of time mulling and discussing their consequences take a lot of things for granted.
One of those things is that once a program, or anything else, is released under an Open Source license, you can't just take it back. Maybe this seems obvious to you, or maybe not, but it isn't obvious to everyone. That's perhaps especially true as we depart the realm of software, where most developers involved with Open Source have given at least passing thought to the implications of the GPL and other such licenses.
This was brought home to me the other week in this comment on Flickr by Lane Hartwell (username "fetching"). (The context isn't especially relevant to this discussion; I suggest reading the whole heated thread if you're really interested.) "[this discussion] has brought attention to some issues and may help change things on both ends. Who knew that CC Licenses were permanent? Flickr sure doesn't tell you when you choose that option."
There are a variety of issues raised in this case, but the one I want to focus on is that a photographer initially posted a picture on Flickr under a Creative Commons license and subsequently changed its license to the default "All rights reserved" (i.e., any use beyond that allowed by Fair Use requires the explicit permission of the photographer). There is a family of Creative Commons licenses. They vary, essentially, in whether the licensed work can be altered and whether it can be used for commercial purposes. However, for our purposes here, we can just think of all of them as "Open Source licenses."
Physical world intuition might suggest that of course the copyright holder, the owner of the property in a sense, can unshare a work anytime he or she chooses. If I give you permission to borrow my car, I can certainly give you permission on a one-time basis or can withdraw that permission at any time (subject to any contractual agreements).
But Open Source licenses are different. Once I put a photograph, a novel, or a program out in the world under an Open Source license, it's out there. I can't go "never mind" and withdraw whatever rights the license granted in the first place.
I'm not saying that the copyright owner can't change the license. In the case of works to which multiple people have contributed, there are a variety of complications and legal theories around changing licenses, but that's a separate issue. The bits or the words or the arrangement of ink droplets that have already been released into the world remain covered by the Open Source license they were originally released under.
A Mattel court case involving its CyberPatrol software and a program by Eddy Jansson and Matthew Skala called cphack raised the issue of whether a GPL license could be withdrawn. However, the case was resolved in a way that produced no definitive legal conclusion. In addition, there were questions over whether cphack was even properly licensed under the GPL.
In any case, the widespread opinion among those who work with Open Source licenses is that what's been released into the world can't be subsequently withdrawn. As stated in this FreeBSD document:
No license can guarantee future software availability. Although a copyright holder can traditionally change the terms of a copyright at anytime, the presumption in the BSD community is that such an attempt simply causes the source to fork.
In other words, if the license is changed to an "unfree" license, you don't get the right to enjoy any downstream changes--whether enhancements to a software program or touchups to a photograph. But the specific work that's been released to the world can't be withdrawn.
Back when I was writing software for PCs, it was pretty common to see licenses offering some program free "for noncommercial use" or some similar wording. The basic idea was that if you got people using some application at home, maybe they'd want to use it at work too--and then they'd buy a commercial license. Besides, very few of those home users were about to send you a check anyway. It's a little bit like using an open-source business model to build volume and awareness with free, unsupported software and then make money from support contracts when a company wants to put the software into production.
There's a difference though.
No widely used open-source software license that I know of makes a distinction about how the software is going to be used. Rather, open-source licenses concern themselves with essentially technical details about how code is combined with other code and what the resulting obligations are with respect to making code changes and enhancements available to the community. But none of the major open-source software licenses restrict use to schools or personal PCs or anything like that. (One could argue that the new GPLv3 license's clauses concerning digital rights management come close to being a sort of usage-based restriction. That's one of the reasons that Linus Torvalds hasn't been a big fan of GPLv3.)
This is probably a good thing. Especially in today's world of interlocking personal and professional lives, defining where "noncommercial use" begins and ends can get extraordinarily tricky.
This was brought home to me last week while putting together a presentation that uses some photographs posted on Flickr.
By way of background, I was searching for photos licensed under Creative Commons--a sort of counterpart to open-source software licenses that is intended to apply to things like books, videos, photographs, and so forth. There are a variety of Creative Commons licenses worldwide (e.g. these are the choices offered on Flickr), but for our purposes here, one important distinction is between the licenses that allow commercial use and those that do not. A noncommercial license means: "You let others copy, distribute, display, and perform your work--and derivative works based upon it--but for noncommercial purposes only."
At first blush, this seems intuitively fair and reasonable. Many of my own photographs on Flickr are licensed under a noncommercial Creative Commons license. It just feels right. Sure, you can use one of my photos on your Web site (with proper attribution, as required). But I can't say that I'd be especially thrilled to learn that someone was off hawking my pics on a microstock site or selling posters without giving me anything back. Thus I, like many, chose a noncommercial license.
But start squinting hard at the line that separates commercial from noncommercial and it starts to get fuzzy in a hurry. Consider the following questions. Are any of these uses truly noncommercial?
What if I have some AdSense advertising on my Web page or blog?
What if I actually make "real" money from AdSense?
What if I put together an entire ad-supported Web site using noncommercial photos?
What if I use the photo in an internal company presentation? (All companies are commercial enterprises, after all.)
What if I'm using those photos as "incidental" illustrative content in a presentation I'm being paid to give? (This was my case.)
What if I print a book of these photos but only charge my cost? What if I cover my time at some nominal rate as well?
And so forth.
This isn't a new question. I did find a discussion draft of noncommercial guidelines, but for the most part it seems a dangerously ill-defined question in an environment where individuals have so many opportunities to micro-commercialize. Sure, the average blog's weekly AdSense revenues won't buy a cup of coffee, but that's a difference of degree, not of kind, from someone who makes $100 a week or $1,000.
I suspect that noncommercial Creative Commons exists because it appeals to an innate sense of fairness. As such, people who wouldn't license under a broader Creative Commons license will use this one. In short, noncommercial Creative Commons is convenient. That doesn't make it necessarily good.
(By the way, I concluded that I would probably have been OK using noncommercial-licensed photos because they were incidental to the topic that I was presenting. However, to be on the safe side, I stuck with photos that were explicitly licensed for commercial use.)
The idea of the "long tail," a concept popularized by Wired's Chris Anderson, permeates much of what is going on with the evolution of IT.
After all, it's the mass participation of almost everyone in creating content of various types that's driving an enormous amount of IT build-out--which, in turn, may well determine the future of the back-end. Simply put, the long-tail premise is that bestsellers don't account for the majority when one tallies up the sales at Amazon.com or the page views on blogs. Rather, it's the far more numerous other 80 or 90 percent of content that adds up to the bulk of the volume.
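The premise is easy to illustrate with a toy model. Assuming a Zipf-like popularity curve with an exponent of 0.8 (an arbitrary illustrative choice, not measured data) over a 100,000-item catalog, the items outside the top 1 percent collectively carry most of the volume:

```python
# Toy illustration of the long-tail premise: under a Zipf-like popularity
# distribution, the "hits" account for a minority of the total volume.
# Catalog size and exponent are illustrative assumptions.

def zipf_weights(n_items, s=0.8):
    """Relative popularity of items ranked 1..n under a Zipf-like law."""
    return [1 / (rank ** s) for rank in range(1, n_items + 1)]

weights = zipf_weights(100_000)
head = sum(weights[:1000])            # the top 1% "bestsellers"
tail_share = 1 - head / sum(weights)

print(f"Share of volume outside the top 1%: {tail_share:.0%}")
```

Change the exponent and the split shifts, but the qualitative picture--most of the volume living outside the hits--is exactly Anderson's point.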
Less abstractly, Anderson's argument is about business. Namely, he argues companies can make money selling to the long tail as shown in the data that I discuss in this 2005 post. I thought and think that it's a powerful concept--although I also think it fair to ponder how many companies are truly well-positioned to make money from the long tail.
When Amazon, Netflix, and Google make their appearance as exemplars for the umpteenth time, one starts to wonder. (In all fairness, Anderson has additional examples; Amazon and Netflix just make particularly rich, data-heavy case studies.)
However, as Alex Iskold noted over at Read/Write Web this morning, there's a slightly subtle, but very important, distinction to be made when we're discussing making money on the long tail. It's about making money on the long tail, not making money in it.
According to Iskold:
The precise point of Anderson's argument is that the collective of the long tail amounts to substantial dollars because the volume is there. The retail/advertising game is a game based on volume. You make money on a lot of traffic to a single popular site or the sum of smaller amounts of traffic to many less popular sites.
Or, as NatC in the comments below Iskold's blog reformulates it:
Amazon can make money from the long tail, while authors of 'minor' books won't. In the same way, Google makes money from the blogosphere's long tail, but small blogs don't.
Iskold then goes on to ponder the longer-term implications. Will bloggers drop out when they find out that no one's reading them?
Here, I think he's on less solid ground.
Authors and musicians wrote "minor" books and songs that were remaindered...long before there was the idea of the long tail. Today's discovery and recommendation systems could doubtless stand much improvement--which makes efforts like the Netflix Prize and Paul Lamere's Search Inside the Music project at Sun Labs so interesting. Nonetheless, by any reasonable measure, the ability of consumers to discover long-tail content is far better than it's ever been in history.
And, let's be honest, creating that long-tail content has never been primarily about making money.
A few days back, I wrote about the difficulty of separating commercial from noncommercial usage with respect to the Creative Commons license.
There's an ongoing legal case that concerns another aspect of Creative Commons commerciality. As Josh Wolf describes the original story:
On April 21, 2007, during a church camp, Chang's counselor snapped a photo of her and uploaded it to his Flickr account. He published the photo under a CC-BY-2.0 license, which allows for commercial use of the photo without obtaining permission from the copyright owner.
In less than two months, the photo had been cropped and repurposed to promote Virgin Mobile in Australia.
Upon learning of the ad, Chang wrote on a Flickr page, "hey that's me! no joke. i think i'm being insulted...can you tell me where this was taken." Underneath Chang's comment, there is a note from the original photographer: "where was this? do you think virgin mobile will give me stuff?"
It's unclear whether Virgin coughed up any loot, but Chang's family has taken legal action against the company for not obtaining proper permission for the use of her likeness.
The basic legal problem here is that, although the photographer gave his permission for Virgin Mobile Australia (or anyone else) to use the photograph for commercial purposes (with attribution), that doesn't mean that all the rights were cleared to use the photo in an advertisement. A stock photo--which is essentially how Virgin Mobile Australia was using the image--typically requires model releases from any identifiable person. Releases may also be needed for photographed property under some circumstances. Identifiable trademarks and the like can also be an issue.
It seems a rather fundamental error on the part of Virgin Mobile Australia (and even more so its ad agency). I guess they just assumed that the Creative Commons photo was like an ordinary stock photo, where someone had taken care of clearing all the rights.
But as Larry Lessig said when Creative Commons itself was dropped from the suit:
As I said when I announced the lawsuit here, the fact that the laws of the United States don't make us liable for the misuse in this context doesn't mean that we're not working extremely hard to make sure misuse doesn't happen. It is always a problem (even if not a legal problem) when someone doesn't understand what our licenses do, or how they work.
The intent of Creative Commons is that the photographer (I'll stick to photography here) can give his permission for commercial, or noncommercial, entities to use his or her work without compensation. It is not, however, intended to be a representation that all the commercial rights to use the photograph in any context have been cleared. In fact, with Creative Commons licenses that permit modification of the final work it's hard to see how it would even be possible to certify in advance that any possible use was permitted under all laws anywhere in the world. And even stock sites place a variety of restrictions on the final use.
This seemed a fairly obvious point to me. But as I read stories and comments in this case, it seems that a lot of people assume that licensing a photo for commercial use under Creative Commons is, in fact, warranting it as unconditionally appropriate for commercial use rather than merely giving a narrower set of permissions strictly from the photographer's perspective.
James Robertson over at Smalltalk Tidbits, Industry Rants writes:
The RIAA (and the MPAA, for that matter) are fighting a war they can't win. They are busily irritating their real and potential customers--either suing them, or making life difficult for them--while the real pirates sail along unimpaired. The amount of inertia in that business is astonishing--the good times for all the do-nothing middlemen are over, and it's time for the labels to accept that fact and get on with their lives.
I don't bring this up because I want to replow the well-worked ground of the out-of-touch content industries, but because Robertson highlights a fundamental point about today's business world. Historically, a lot of companies and people made boatloads of money acting as intermediaries without adding much in the way of value.
I see this in my own industry. When I worked for a system vendor in pre-Web days, we subscribed to the services of one industry analyst firm whose main business was essentially collecting product data sheets from everyone and faxing them, on request, to subscribers. Thus, for example, if I was getting ready for a product announcement and needed information on the competing Digital or Wang systems, I'd call up this firm and ask for any info it had on X, Y, and Z products. The firm would send it to me without anything in the way of commentary or other color. But given that I could hardly call up the Digital or Wang sales office and request this info myself, it was still a useful service.
Of course, that this was actually a business once seems almost laughable today. (And, in fact, it was even worse than I described. Not only did we have to subscribe to a service to get this information, but we had to subscribe to micro-sliced technology segments such as midrange systems or workstations.)
That's not to say that there isn't still money to be made in establishing connections and filtering data. But it's worth remembering that--for the most part--it's now about the direct value provided by those services rather than just charging a gatekeeper fee for using a magic key to unlock some basic data.
Over the past couple of days, I've read a couple of great pieces about the digital delivery of written content.
Tim O'Reilly mines his own data and experiences to talk about the economics of e-books. Scott Karp at Publishing 2.0 follows up with "The Future of Print Publishing and Paid Content," in which he considers what people are paying for or what they think they're paying for when they buy a newspaper:
For many people who paid for print publications, including newspapers, magazines, and books, a significant part of the value was in the distribution. That DOESN'T mean people don't value the content anymore. It means that the value of having it delivered to their doorstep every morning, or having it show up in their mailbox, or carrying it with them on a plane--in print--has CHANGED because of the availability of digital distribution as an alternative.
The problem for people who sell printed content is that the value of the distribution and the value of the content itself was always deeply intertwined--now it's separable.
People ARE willing to pay for certain digital content, but they AREN'T willing to pay for the distribution--specifically, not the analogue distribution premium.
I think he's spot on. In fact, I might go a little further.
We're largely talking subconscious mental math here, so I don't claim this to be an exact analysis. But I'm going to posit that most people act as if two things are true: that the content itself accounts for only a small portion of what they pay for a print product, and that digital distribution is essentially free.
There's some truth in these generalizations, but my guess is that they're not as true as most people think.
There are also these costs: marketing, editing, publisher profit, the money to cover everything that isn't a bestseller, etc. These things aren't distribution, but they aren't really content value either. So they tend to get lumped in with the distribution costs that supposedly don't need to be paid for in a digital world. However, much of this overhead still does need to be paid for.
At the same time, when I last looked, big server infrastructures didn't grow on trees and neither did the bandwidth and the people needed to make use of them.
Implicit assumptions that digital distribution is essentially free are commonplace. Those would be wrong. It may be cheaper depending upon the details of what type of content we're talking about exactly. (Video demands more infrastructure and bandwidth than books, for example.) But free? Nope.
A few weeks back, I wrote about a Sony camera ad that used a famous Life magazine photograph to illustrate how "Timing Is Everything." Given that the picture in question could hardly have been taken with a Sony digital camera (which wouldn't exist for decades), I thought it a poor choice to illustrate the technical prowess of Sony's latest digital SLR.
After I wrote the original post, I noticed something else when I was studying the original photograph and the one in the ad; they weren't quite the same. I thought it a slightly amusing oddity but not much more. The differences were fairly clear if you looked at a blowup, but they were fairly subtle at more modest sizes.
In any case, while perusing Time Magazine last week I ran across another ad in the series that caused me to go "Huh?" (I think it was a bit saltier than that but you get the idea.) The ad in Time was clearly intended to show an example of bad timing.
In fact, that's exactly the point of the whole series of ads created for this campaign by BBDO New York. (A third ad is here.) As MediaPost says:
Timing is everything, especially when you're taking pictures. If you've ever wondered what a famous photo would look like had it been taken a second or two later, then you're bound to enjoy this print campaign for Sony's Alpha DSLR-A700 camera. Imagine the construction workers eating lunch atop a steel beam while others were still working. Or a leopard readying to attack a baboon. What would happen if a referee stood in the way of Brandi Chastain's winning penalty kick and striptease?
On the one hand I feel a little silly. I badly missed the point of the ad.
Having said that, I have to give BBDO New York a 2 out of 3 for this campaign. The construction workers and the Brandi Chastain shots are clear examples of bad timing. They're witty ads and unambiguously make their point.
The leopard and baboon, however? It's not as good as the one that Life originally published. But the differences are slight. And, in spite of one or two comments made to my original post, I don't see how anyone could call the shot used for the ad a bad or badly-timed photograph in an absolute sense. I won't argue that everyone else should share my aesthetic opinion but I'm confident that had I taken that picture, I'd have a big enlargement hanging on my wall. And I doubt that I'm alone in that.
So I stand by my opinion that it was a poor choice for this ad--just for different reasons than I initially thought.
(P.S. I don't know if the ads used in the campaign are different shots in the original sequences or if they are Photoshopped versions of the original "good" photographs. I was initially somewhat puzzled when I carefully studied the two Dominis shots (the Life version and the Sony ad version)--because there seemed to be more differences on the baboon side than on the leopard side. Nothing conclusive, but it didn't look quite right to be two shots in the sequence even if I did try to convince myself that the mechanics worked. At the time, it just made no sense to me that someone would have digitally manipulated the photo to make it worse. See also the discussion in the comments. Now, of course, knowing that the whole idea was to have "bad" versions of iconic photographs, deliberately degrading part of the picture makes perfect sense.)
I think that Alfresco's Matt Asay and I share similar concerns about the current spate of lawsuits that BusyBox and the Software Freedom Law Center (SFLC) have been busily filing. On the one hand, open-source developers have to protect their rights. However, as Matt notes:
My primary concern is that this (and the other two ongoing BusyBox lawsuits) will create more misunderstanding about the requirements the GPL imposes. It won't be helpful to have this result in less GPL-licensed software being adopted.
Put another way: if using GPL software comes to be seen as an invitation to get sued, fewer people will use GPL software. Whether individual enforcement actions can be justified isn't really the point. It's whether, collectively, copyleft-style licenses (including the GPL) start to look more legally risky than beneficial. CNET.com's Stephen Shankland took a look at SFLC's increasingly hard-line approach to license enforcement in "GPL defenders say: See you in court."
The latest BusyBox lawsuit against Verizon seems especially problematic.
Essentially, Verizon distributes Actiontec MI424WR routers to its customers for its Fios Internet service. It's unclear to me whether the device is sold or loaned or given away as part of the service; perhaps it's different by geographical location as I've heard conflicting experiences. In any case, the router is an integral part of Verizon's service. The routers use firmware that contains a variety of GPL software including BusyBox, a set of small versions of many common Unix utilities combined into a single executable.
The crux of the complaint is that Verizon allows its customers to download the router firmware from Verizon. Thus, although Actiontec apparently provides source code as required by the GPL on its own Web site, Verizon does not. It's also unclear whether the Verizon firmware is identical to Actiontec's and, if there are differences, whether they are relevant to the GPL or BusyBox. Regardless, the complaint focuses on the fact that Verizon offers firmware binaries for download without offering the corresponding source code; it makes no mention of Verizon distributing the binaries for a unique version of BusyBox.
Because Verizon is distributing firmware binaries without an offer of source code, this would, in fact, appear to be a violation of the GPL.
But it seems a rather picayune and hyper-technical one--especially if the Verizon firmware uses the same BusyBox code that is already available on the manufacturer's Web site.
It seems only a small step from this case to others that would raise some real concerns about using open source. I'm not a lawyer and draw no conclusion about whether these different sets of facts could trigger GPL violations or not. I merely note that they're close enough to the Verizon case that fine legal parsing would be needed to distinguish them.
To my non-lawyer (but reasonably open source-educated) eyes, such cases aren't clearly distinguishable from that of Verizon. And, if they can't be clearly distinguished, they suggest scenarios that would be troubling to a lot of vendors making use of GPL software.
(Throughout this post, I've used the generic term "GPL." The new iteration of the license, GPLv3, includes some specific language related to end-user hardware devices that may or may not be relevant in this context. In any case, the reality is that the vast bulk of existing GPL software still uses the GPLv2 version.)
There was a time when Advanced Micro Devices was on a roll, and really seemed to have Intel's number--especially in the server space.
AMD's Opteron processor represented a significant advance in x86 processor design, causing Intel no end of headaches. More than any other single reason, Opteron is what forced Intel to largely rototill its product roadmap a couple of years back in order to switch its focus from frequency to multicore designs sooner than it had intended. For that matter, Intel may well have never added 64-bit extensions to its x86 processors had AMD not done so first. (Intel's plan and preference was for customers requiring the larger memory capacities allowed by 64-bit addressing to adopt Itanium processors instead.)
But then throughout 2007, Intel was seemingly hitting on all cylinders. It came into the year propelled by its "Woodcrest" Xeon processor, based on the new Core microarchitecture, and "Clovertown," the first x86 quad-core design. In September, it rolled out "Tigerton" (Xeon 7300) for four-socket servers and capped the year with the introduction of "Penryn," a new design that's the first to use Intel's 45-nanometer manufacturing process.
For its part, AMD turned in mostly poor financial results and had problems rolling out its new "Barcelona" quad-core processor. Its recent financial analysts day could not have been much fun for company execs. (A few weeks prior, the company had canceled an event for industry analysts at the last minute.) Based on AMD's discussions and disclosures at its financial analysts day--as well as other discussions and happenings over the past year--here are some of my thoughts on where AMD stands today.
Barcelona was not just late, but disappointing. The two things go somewhat hand-in-hand of course. In the Moore's Law-driven processor business, even the most extraordinary or cleverly designed products aren't nearly as interesting six months or a year later. Even before the latest round of Barcelona delays, the announced product was clearly not the game-changer that AMD had suggested it would be. That's not to say that it doesn't perform reasonably and even have areas of particular performance strength (especially floating point and virtualization), but when you set the expectation of a home run and end up poking a single, people are bound to be disappointed.
Intel is doing things right. x86 processor sales are perhaps not quite a zero-sum game. Innovation and advances encourage sales and upgrades that wouldn't happen were they not present. However, there are still plenty of cases in which someone has decided to purchase an x86 server; they'll evaluate the options and make their selection. In this case, what matters isn't so much how absolutely good the Intel or AMD product is, but how they stack up relative to each other. Thus, when Intel was faltering, AMD's advantage derived both from its own good execution and Intel's bad execution. AMD hasn't done everything right over the past year, but a big part of its problem is that Intel hasn't done much wrong.
AMD's routes to market are stronger than they used to be. This is one area where AMD has continued to improve its position, even as its product advantages have shrunk. In 2000, AMD processors were designed into a single HP notebook. Around that same time I conducted a series of interviews with ISVs, OEMs, and end users to look into how they viewed the AMD brand relative to Intel. Bottom line? No one preferred AMD, and the vast majority strongly preferred Intel. Even in 2003, when AMD announced the much-anticipated Opteron at the Hudson Theater in New York, IBM was the only Tier 1 OEM on stage. The Barcelona launch included representatives of all the Tier 1 companies. And AMD has been gaining design wins in the client space as well. In short, to the degree that AMD can deliver competitive products, it has far more and far better avenues to actually sell them than it once had.
AMD is shifting its emphasis from server CPU performance to a view that's more about "platform" performance and functionality, on clients as well as servers. Specifically, Opteron performance has clearly been the tip of AMD's arrow. With Intel ramping up its 45-nm process, my take is that AMD recognizes that it will (at best) be just able to play second fiddle if it runs basically the same plays as Intel does. Some of this is about leveraging its ATI assets and integrating graphics processing units for "stream computing" as well as for virtualization. It's also about trying to find and exploit market segments where Intel may not be as focused. None of this is unreasonable, although the full realization of "Fusion" (the integration of GPUs on the processor) isn't expected until near the end of the decade, and Intel is also going after new market areas such as ultramobile PCs.
AMD had a good run when Intel was more or less sleeping. AMD took that opportunity to, among other things, establish itself as a mainstream supplier for enterprises and others. That's the good news. The bad news is that its current products don't offer an especially compelling reason for people to buy them.
I've held off posting about the whole Lane Hartwell, Richter Scales, "Here Comes Another Bubble" brouhaha. I've done so, in no small part, because my own feelings on the topic are...complicated.
On the one hand, I generally favor people sharing their creative output to the degree that it's economically feasible to do so. Our culture is richer and more interesting for the widespread tearing down of walled gardens.
Just to be clear, I'm not advocating some parodic version of free culture in which any content that can be grabbed should be grabbed and there's nothing anyone can do about it. Rather, I'm just suggesting that rigid and unrelenting copyright enforcement for even relatively minor infractions doesn't make me terribly comfortable. (I understand that, for Lane Hartwell, the Richter Scales' use of her photo was a sort of "straw that broke the camel's back" because of past use of her pictures without permission.)
On the other hand, as I've read through some of the commentary and comments in this case, I've gotten a bit irritated. This comment is fairly typical: "A photo of some grinning geek is not protected art. It's a commodity, such as a phone number or the atomic weight of carbon." In other words: eh, it's only a photograph. What's the big deal? As a sometimes photographer, I can't tell you how many times I've run into a similar attitude, even from writers who would have plenty to say if you grabbed a piece they had written and "repurposed" it.
There's also been a great deal of poorly informed commentary about Fair Use. Jason Schultz at LawGeek gives the best rundown of the legal issues in this case that I've seen. I'm not sure, based on a lot of discussions and reading about copyright law in the past, that I agree with his ultimate conclusion (that the use of the photo was probably Fair Use). In any case, as he says, it's a close call in an area of copyright law that is notoriously squishy and very dependent on the specific facts in a given instance. So, if you want to read up on the legal issues involved, I defer to Jason's post.
However, in my view, Jason's most important point has nothing to do with the law.
I'm no Internet ethicist, of course, so I can't really say what the proper ethical outcome should be for this or other similar situations. However, for me, the idea of attribution and promotion have strong appeal. They respect who the artist is and try to help them thrive in their work. I also think ethical online users should consider tithing any financial gain from the use of other people's works back to the original creator--in essence voluntarily offer to post-date royalties if the project amounts to anything profitable. Such steps would, IMO, go a long way to building a stronger online creative community rather than tearing it down or apart.
There are, of course, cases where misappropriation of posted material isn't going to be remedied by adding a photo caption or a byline, but it's often all anyone is looking for. I have no idea whether that would have been sufficient in this particular case or not, but for a lot of us, getting the proper credit is mostly what we're looking for.
The Richter Scales have reposted their "Here Comes Another Bubble" video sans the much-disputed Lane Hartwell photograph of Owen Thomas that they used in the original video without permission and without attribution. Lane has also made a statement:
As the Richter Scales stated in their blog, the video that used my image--without my permission--was viewed just under one million times on YouTube. In the end, the band opted not to work with me toward a fair resolution of the issue. I have to say that I'm very disappointed with the members of the band I negotiated with in good faith.
Lane goes on to say:
I will be sending the band an invoice for their use of my image in the first version of the video. I hope they pay it as I'll use the money to pay my lawyer and donate the rest to KidsWithCameras.org. Kids with Cameras is a nonprofit organization that teaches the art of photography to marginalized children in communities around the world. This was the offer I proposed to the Richter Scales that they chose to disregard.
Thus, it doesn't appear that, in this particular case, attribution in the original video would have put a stop to this controversy before it began. Perhaps if the band had asked in advance. I don't know. When people have requested to use my photographs in a book and, in one case, a PBS documentary I've always said yes for the price of a photo credit. But that's me. And I'm not a professional photographer with a history of having her photos and those of her friends ripped off.
Jonathan at Plagiarism Today has a great recap of the entire imbroglio. Among his lessons learned:
Attribute obsessively: If you use other people's content in any way, attribute, attribute well, and attribute graciously. It is best to follow industry standards here and to start out with the intention of doing so rather than having to go back and do it later, when it is much harder.
Remain calm: When emotions get involved, as they often do with content theft and plagiarism issues, it is easy to lose sight of how important a case really is. Some are more important than they seem, others are less. This case was the latter. It is important to focus less on feelings and more on legal issues and how a case of plagiarism can potentially help or hurt you.
As I noted yesterday, my own feelings were pretty conflicted about this tempest. Lane's DMCA takedown notice that bumped the original video off YouTube seemed somewhat disproportionate to me. On the other hand, the Richter Scales largely hid behind a Fair Use copyright defense. Leaving aside whether Fair Use applied here (it's at best a borderline case), it's just bad manners and bad practice to not give attribution to all the people whose work the group used--as they have now done in the revised video. This case--and many others like it--is far more about proper societal behavior than it is about the nuances of copyright law.
As "Miss Rogue" writes in "Tragedy of the Commons: Lane Hartwell vs Richter Scales":
Since the video was viewed hundreds of thousands of times (prior to takedown), there was a missed opportunity there for the many photographers whose photos were used to make this group famous. In a post titled Credit and "Here Comes Another Bubble", the author explains:
"We did make an effort to credit those people we actively worked with on the video, as well as Billy Joel, which we listed in the comments on YouTube and on our blog. But, given the large number of sources we used, the task of assigning credit for each source seemed impractical."
He goes on to mention Lane Hartwell...without linking to her photos or her Web site. As one commenter said, "Basically if I am reading your post correct, what I hear you saying is, 'Mea Culpa, but we're lazy.'" In actuality, the time one can take to list the photo credits is a fraction of the time it would take to go out and duplicate the work of those artists to make the same presentation.
I'm unsure what good will come out of this whole incident. The problem is that when emotions run high, as they did here, people tend to spend more time fortifying their own positions than exploring new ones. However, I can at least hope that the affair has raised general awareness a bit about giving proper credit for images and other material from the Web.
IBM's last major Systems and Technology Group (STG) reorganization in 2000 both put an exclamation point on and added momentum to the company's resurgence. IBM described the introduction of the umbrella "eServer" brand atop all of its server product lines as:
a product of Project Mach 1, a major cross-company initiative begun three years ago to harness the company's best technologies and practices to support the infrastructure for the next phase of e-business. From the consolidation of IBM server manufacturing and development, to the realignment of its sales force, to breakthroughs such as copper chips, Silicon-on-Insulator and Memory eXtension Technology, to partnerships with leading software vendors, to IBM's corporate-wide embrace of Linux--every corner of IBM moved closer to today's launch of the IBM eServer.
This unification stood in stark contrast to the past norm in which product groups in Poughkeepsie, N.Y., Austin, Texas, Raleigh, N.C., and Rochester, N.Y., often seemed more like warring fiefdoms than different faces of a single integrated company. True, the various product lines--xSeries, zSeries, iSeries, and pSeries--maintained distinct (if sometimes uncomfortably overlapping) identities within the eServer scheme. Nonetheless, by historical IBM standards, eServer under Bill Zeitler looked like a big and (mostly) happy family.
This latest reorg can perhaps be best thought of as taking the next big step toward shifting away from a structure based on technology and product line distinctions, and toward a structure along the lines of distinct customer segments.
The new STG organization retains groups, led by general managers, oriented around the various product lines: Mainframe Platform (System z), Power Platform (System i and p), Modular Business Platform (System x and BladeCenter), and Storage Platform. However, these are now entities mostly concerned with managing product development and rollout. Go-to-market activities, including sales, now reside in four other groups:
IBM's justification for these latest changes is that customers want to see one face of IBM. They don't want to see System x and System p sales reps each offering whatever parochial solution is better for their particular comp plan. Even if the customer doesn't mind all that much, IBM is doubtless not a big fan of customers playing IBM divisions and sales reps against each other to get the best price. The customer segment approach is a play that IBM's run before--indeed, run quite successfully. There were good reasons for getting away from it in years past, but there seem to be good reasons to get back to it now.
That said, any organizational scheme has its trade-offs. Product-centric alignments are in some sense more natural, or at least simpler. They match up with the underlying technologies. This makes it easier, for example, for sales teams to become experts on particular product sets--without more complicated overlay sales specialist schemes. A product orientation also means that you're likely to see more effort go into pushing products into new areas where they aren't necessarily the most natural fit; Linux on POWER is one example from IBM's playbook. Although some force-fitting may well be wasted energy, such initiatives can also open up new markets for a company.
At the end of the day, it may even be reasonable to ask how much the specific organizational structure ultimately matters. Some mappings may make more or less sense at a given point in history for a given company, but I've seen so many cycles and fashions over the years that I have some trouble accepting that any one approach is ideal. Perhaps it's more important to occasionally shake things up and avoid complacency than it is to lay out any particular form of organizational chart.
The Forbes.com headline "Sun Plans To Close Its Data Centers" rather overstates Andy Greenberg's actual story.
In an interview, Sun Microsystems' chief technology officer of information technology, John Dutra, balked at committing to the 2015 goal, and cautioned that Cinque's post was more of a "vision" than a "tactical plan." But Sun's drive to reduce its in-house computing hardware is real. In five years, Dutra says, more efficient servers and virtualization--the conversion of multiple computers into software that can be run on a single machine--will allow Sun to do away with five of its eight data centers, reducing both the centers' square footage and data consumption by around 50%.
That said, Dutra goes on to indicate that Sun does eventually plan to reduce those numbers to zero, renting out the company's processing and storage capabilities from external data centers--albeit, it would appear, at some vague future date.
It also echoes the prediction of Sun CTO Greg Papadopoulos:
...That there will be, more or less, five hyperscale, pan-global broadband computing services giants. There will be lots of regional players, of course; mostly, they will exist to meet national needs. That is, the network computing services business will look a lot like the energy business: a half-dozen global giants.
The idea is that these mega-service providers will increasingly deliver most of the world's computing in the form of a service. In other words, they'll be the back-end to "cloud computing" or "The Big Switch" (to use the term from Nick Carr's latest book.) For a company to look to a future in which it doesn't own and operate its own computers is fully consistent with this vision.
However, Sun is not just any company. It makes computer systems. And there is a very real question whether "arms merchant" is necessarily a great role to eye in a cloud computing world.
If there are hundreds or thousands of "software as a service" and "hardware as a service" companies? Sure. That's a situation not much different from today. Some independent software vendor delivering its own software in the form of a service isn't going to get into the hardware and operating system business. The investments are just too large.
However, if one accepts Papadopoulos' vision at face value--that there will be a very small number of providers--the competitive landscape looks much different. Such providers could do much of their own system engineering--as Google, in fact, does today. At the very least, a handful of mega-providers would have the sort of market power over their suppliers that probably no single company does today.
As a result, if any of the large system vendors truly believe that a highly concentrated compute utility is the future, it's unclear why they should be embracing the role of passive arms merchant given how little control they would have over their own destiny in such an environment.
As readers of this blog know, two of my interests are photography and open source, so I'm naturally particularly interested in the way the two intersect with each other. As a result, I've been doing a fair bit of reading and thinking about the Creative Commons license in the context of photos and, more broadly, how photos are best protected and shared in an online world. I don't claim to have all the answers, but I wanted to share some threads that I've been researching and pondering.
As I wrote back in November, the Noncommercial condition in some Creative Commons licenses needs to be clarified. The problem is that noncommercial, in the sense of not associated with making money, is such a vague term in an online world where Google AdSense and other forms of advertising are ubiquitous and so many Web sites and blogs represent some ambiguous intersection of the personal and professional. The Creative Commons organization apparently recognizes that there are issues. On its site, it states: "In early 2008 we will be re-engaging that discussion and will be undertaking a serious study of the NonCommercial term which will result in changes to our licenses and/or explanations around them."
I can't say that the guidelines in process really clear things up a lot. They seem to pay a lot of attention to US-centric technical distinctions related to what constitutes a nonprofit organization (IRS 501(c)(3)). Many very large and well-funded organizations, such as the National Rifle Association and the Sierra Club, are non-profits. On the other hand, the draft guidelines seem to suggest that some money-making uses are OK so long as it's just an "individual."
With respect to photography specifically, "Commercial" and "Noncommercial" are particularly confusing terms because commercial already has a fairly specific meaning in the context of photography. It mostly applies to photographs used for advertising and marketing purposes--as opposed to editorial or artistic uses. It's an important distinction within photography because commercial uses generally require releases from subjects whereas other types of photographs do not.
Thus, it seems to me that a Creative Commons definition that focused more on the type of use rather than the type of user could help to clarify things. A Noncommercial license could, for example, prohibit uses that relate to marketing, advertising, and other such uses. It might also prohibit the direct resale of the photo (as, for example, stock sites do).
But, you cry, a magazine like The Economist shouldn't be able to use a Noncommercial image either--even for editorial purposes.
That's not an irrational position, but I'd argue that if Noncommercial is defined to read "not associated with making money," you're effectively prohibiting the vast bulk of uses that aren't already covered under Fair Use (use in an academic environment), are trivial (I make a print to hang on my wall at home), or both. Sure, you can have such a license, but why bother? Some personal blogs and MySpace pages might gain access to some photos under such a license, but it's a pretty small slice of the possible uses. If you truly don't want anyone to (legally) profit from your photographs however indirectly, there's a simple option: Don't release them under Creative Commons.
Have I convinced you that the above would be a reasonable approach to a Noncommercial Creative Commons license?
If so, I hope you won't be too upset at me for burying my real lede. Because if the above is a reasonable Noncommercial CC license--and I think it is--then we don't need it. And that's actually a good thing because if you take a good look at the Creative Commons license summary page, it's clearly something that only a license geek could love and is far too complex in its Chinese menu approach to be widely understood and accepted.
Let's start with why we don't need the Noncommercial license. One justification for having a Noncommercial is that you don't want your photos used in some big advertising campaign or in a company's annual report without compensation. However, in fact, photographs licensed under Creative Commons licenses of any sort aren't a good fit for commercial photography anyway.
One problem is that they haven't cleared model and property rights. (Dan Heller discusses even more serious problems in this post. I'm not sure I buy into everything Dan writes, but he raises a lot of good issues that, while not limited to commercial photography, are probably most pertinent there.) The attribution requirement would be problematic for many other types of uses. (I can't imagine the typical marketing presentation that I see consistently incorporating appropriate bylines as it passes through dozens of hands and revisions.)
As for reselling photos licensed under Creative Commons? That seems far better controlled by limiting access to original high-resolution images than it does license terms.
I could also make a variety of arguments against having separate licenses that allow or prohibit changes to an artistic work.
At the risk of oversimplifying, open-source software licenses are mostly concerned with the degree to which derivative works have to be given to the commons. With rare and narrow exceptions, they don't get into who is using the software or the manner in which the code can be changed or extended. That may seem perfectly normal, but that's only because we're so used to it. One can easily imagine an open-source license that says some piece of software can only be used and modified in an academic setting. That such licenses are rare to nonexistent is a large part of why open-source software has become so commonplace.
By contrast, Creative Commons licensing offers up a complicated set of options that seem calculated to encourage people to contribute works to the commons while not pushing their envelope to allow any uses with which they're uncomfortable. While an understandable approach, it creates a system that's far too complicated and doesn't, in my opinion, have any real benefit beyond a simple license that requires attribution and which requires downstream derivatives to maintain the same license.
No one is forcing anyone to put their work into the public commons. But, once you do, you need to accept that you no longer can wholly control how it is used. The open-source software world understands this to its benefit. Now, open-content needs to do the same. The current regime is far too complex to implement and communicate.
Over on The Open Road, there's a rundown of the acquisition valuations for three open-source companies: MySQL (bought by Sun Microsystems earlier this week), JBoss (Red Hat), and Zimbra (Yahoo). The author concludes that depending upon the revenue assumptions, whether you use trailing or forward-looking revenue numbers, and whether one looks at bookings rather than revenue, the valuations for all three were somewhere in the 15 to 20 times annual revenue range.
So is this just another bubble in which companies that are considered in the forefront of the Web or open source or whatever get snatched up for unjustifiable sums?
It is true that all of these companies could be considered category leaders. It's clearly so in the case of MySQL (open-source database) and JBoss (open-source application server), so some premium might reasonably attach to their post position. Yet, one would think clear leadership would already be reflected in their revenue numbers, so that can't be the whole story. Is there any other explanation--especially one that doesn't require irrational exuberance?
I think so. As I wrote in the context of Sun's acquisition of MySQL a few days ago, it's hard for standalone, narrowly focused open-source companies to profit. A financial analyst on the Sun/MySQL call estimated that MySQL had annual revenues of $60 million to $80 million in 2007 and operated at about breakeven. Not bad, but considering that MySQL is widely regarded as one of the true open-source success stories, it's hard to view those financial results as better than modest. At issue is that even with an enterprise version and value-add services--in addition to basic support--MySQL converts a small proportion of users into paying customers. That might be OK, but even when it does monetize users, it's pretty much limited to selling them a subscription for its enterprise version--which is still a great bargain by historical proprietary database standards.
However, plug MySQL or some other open-source company into a larger organization and the opportunities increase enormously. In the case of Sun, each MySQL customer that is willing to pay for the Enterprise database is now also a potential customer for Sun professional services, servers, and other software. The same logic applies to JBoss and Zimbra with their respective owners although those paths to incremental monetization may be less clear--and, indeed, Red Hat has publicly admitted that, so far, it hasn't leveraged its JBoss acquisition as well as it might have.
Although there are any number of small and profitable independent software vendors (ISVs) in the proprietary software world, small software companies get gobbled up by larger vendors all the time, of course. There's often more value in integrated offerings than in point products. Enterprises are also more comfortable sourcing some types of products, such as high-level management tools, from large ISVs or system vendors.
But over and above all the reasons why it's hard to make a profit as a standalone ISV, a look at the market suggests that it's even harder in some ways for standalone open-source ISVs. It's not that their product is any less valuable and it's certainly not less desired. But it's hard to monetize in a standalone way.
That could well be a reason for these high valuations. The value is already there but it takes a larger and more diverse organization to supply the leverage that makes money off that value.
A couple weeks ago, the Linux Foundation released a podcast interview with Linus Torvalds that, among other topics, touched on if or when Linux would "upgrade" from the GPLv2 license to the new GPLv3 version that was approved last year after much acrimony. Allison Randal's summary over at O'Reilly Radar seems about right: "In the end, what we have is a stable system by reason of inertia. It may eventually shift, but not anytime soon."
One major reason is that Linus just doesn't see any compelling reason to make the shift--which is to say that he doesn't see any particular advantage to Linux at this time. That said, he's indicated the relicensing of OpenSolaris under GPLv3 could persuade him to make such a move. (Sun Microsystems floated such a move as sort of a trial balloon but it was shelved--at least for a bit--because of vocal objections from the OpenSolaris community.) In other words, if shifting Linux to GPLv3 would give Linux access to code that it wouldn't otherwise be able to use, that could be interesting. On GPLv3--as on many other matters--Linus is a rather pragmatic individual. Truth be told, far more pragmatic than many of the other highly visible personalities around open source.
Thus, don't expect Linus to push Linux toward the GPLv3 for philosophical reasons. Indeed, he's objected loudly to the GPLv3--although his more recent rhetoric has calmed--precisely because it essentially makes philosophical judgements about allowable uses (especially Digital Rights Management) that have nothing directly to do with code.
However, in the wake of this interview, a meme was also making the rounds that it would be difficult from a practical and legal perspective to move Linux over to the GPLv3. At issue is that Linux (as in the Linux kernel) is licensed under GPLv2. Some GPL’d software is licensed under the terms “GPLv2 or later,” but not Linux itself. Furthermore, contributors to Linux do not assign their copyrights to some other controlling entity--as do, for example, contributors to the Free Software Foundation’s GNU project. Thus, the logic goes, relicensing Linux under GPLv3 would require getting agreement from hundreds of contributors or more--and, perhaps, even having to rewrite code submitted by people who don't agree to the shift or who couldn't be contacted.
I'm not a lawyer and have no legal opinion on this, but I wanted to point out that Eben Moglen discussed this situation at the Red Hat Summit last May. While certainly not a definitive opinion, as the former general counsel for the Free Software Foundation that created the GPL, Eben's voice surely carries some weight. As I noted in a piece I wrote at the time:
From Eben’s perspective, “My guess is that Linux is a collective work…as evidenced by a decade of LKML [the Linux Kernel Mailing List] discussions. That’s my guess.” So, while it remains very much an open question whether Linus (and the other lead kernel developers) would want to make the move to GPLv3, it’s unclear that there are any fundamental roadblocks (such as having to get explicit agreement from every person and organization who ever contributed to Linux) should he choose to do so.
To be sure, the answers to legal questions are often ambiguous; as Eben also noted, there are alternative theories that could play here as well. However, if Linus and the other key kernel developers were to back a shift to GPLv3, and if there were reasonable legal air cover from respected Open Source authorities for doing so, it seems unlikely that we'd see a substantive challenge to such a move. Oh sure, there would be loud hand-wringing over at Debian and in other forums--this is open source after all. But, as a practical matter, I'd expect any "controversy" to blow through pretty quickly.
So the question arises: why don't artists serialize the release of songs? Why not create a "season" of song releases, much like the fall TV season, and promise fans that Flo Rida is going to release a new single every week or two for the next 10 weeks?
Whenever discussions like this arise, there's always the school of thought holding that most albums only have one or two decent songs anyway. This theme is presumably a close cousin to "all current music is crap" (i.e., they just don't make music like when I was a kid).
However, there's another school of thought. As this comment notes: "Currently, those who only purchase individual songs, rather than entire albums, are missing many lesser known gems, and are missing the cohesive experience of an entire album."
We can come up with examples where this is clearly the case: Pink Floyd's The Wall, The Who's Tommy, and so forth. However, it seems a stretch to call the vast majority of albums out there particularly cohesive. In fact, to the degree that there's excessive sameness within a single album, I tend to see that as a bug rather than a feature.
It's worth noting that the album is far more a creation of technology and custom than of art. Columbia produced the first 12-inch, 33 1/3 RPM vinyl "long playing" record in 1948. (According to Wikipedia, the term "album" relates to the fact that the relatively short 78 RPM records that preceded LPs were kept in a book "album.") Although 45 RPM singles (in particular) were popular during the 1950s and early 1960s--such singles generally had a "hit" on the A-side and a less popular song on the B-side--LPs continued to define a great deal about how music was released. Even cassettes and CDs didn't change things much as these new formats adopted about the same capacity as the LP. As Kees Immink wrote in the Journal of the AES:
The disk diameter is a very basic parameter, because it relates to playing time. All parameters then have to be traded off to optimise playing time and reliability. The decision was made by the top brass of Philips. 'Compact Cassette was a great success', they said, 'we don't think CD should be much larger'. As it was, we made CD 0.5 cm larger yielding 12 cm. (There were all sorts of stories about it having something to do with the length of Beethoven's 9th Symphony and so on, but you should not believe them.)
In other words, whenever the industry has come up with a new format it has almost always stuck with roughly the same playing length.
There are many lessons here for IT and other businesses. For one thing, there's backward compatibility. The industry wanted to reissue LPs onto cassettes and CDs without routinely having to split a single album across multiple cassettes or discs. In practice, you rarely get to start with a clean slate. The digital realm finally banishes the physical aspect of backward compatibility. No longer is there any technical reason to favor selling any particular size of song bundle.
However, there are more subtle types of inertia. Whole sets of practices from booking studio time, to promotion, to going on tour have grown up around the chunk of music that is the album. On the other hand, the nature of digital distribution--and the flat-pricing scheme that Apple has fought for successfully (even though it doesn't really make economic sense)--tend to drive us towards hits-driven downloads, Long Tail notwithstanding.
I don't know if a scheme like Mark's would work. However, it's increasingly hard to see a traditional album format making sense in a world where it's got no physical reason to exist. If we move away from albums, perhaps we have to recreate "B-sides" or other mechanisms that encourage the sort or serendipitous discovery that the album has brought us over the years.
As technology observers, it often seems most natural to view the strengths or weaknesses of some online service through an infrastructure lens. For example, the virtualization layer underlying Amazon's EC2 very much shapes the nature of the offering. On the one hand, virtual appliances of a sort let you quickly fire up a virtual machine (VM) instance. At the same time, VMs are, in a sense, ephemeral--which has implication for the way you store data permanently within the Amazon framework.
Other examples simply involve trading off service levels against costs. Want double-redundancy? You get what you pay for.
However, some of the recent changes at eBay are a reminder that optimizing for a particular type of customer and, ultimately, for a particular business model is also about the rules imposed on top of the infrastructure. To be sure, some rule choices are shaped by technology and the realities of a World Wide Web.
However, other rules are just choices. And, in the case of eBay, even fine tweaks of the governing rules and procedures can hugely affect all sorts of dynamics between buyers and sellers--and thereby how attractive a venue is to buy and sell in general or even just to buy or sell items with a particular set of characteristics or in a particular way. For example, standard eBay auctions--in which eBay's computer will "proxy bid" up to a pre-determined maximum--look like a sealed-bid, second-price Vickrey auction. This has all manner of implications for buyer and seller behavior--as well as for the ways in which the system is potentially exposed to gaming in various ways. (In The Undercover Economist, Tim Harford takes a look at how changing auction formats made a huge difference in selling wireless spectrum frequencies.)
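To make the proxy-bidding point concrete, here's a minimal sketch (in Python, with invented bid values and an assumed bid increment--eBay's actual increment schedule varies by price) of how a second-price mechanism resolves: the highest private maximum wins, but pays only a notch above the second-highest maximum.

```python
# Minimal sketch of eBay-style proxy bidding: each bidder submits a
# private maximum; the highest maximum wins but pays one increment
# above the second-highest maximum (a second-price/Vickrey outcome).

def resolve_proxy_auction(max_bids, increment=0.50):
    """max_bids: dict mapping bidder -> private maximum bid."""
    ranked = sorted(max_bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top = ranked[0]
    runner_up = ranked[1][1] if len(ranked) > 1 else 0.0
    # Winner pays the runner-up's maximum plus one increment,
    # capped at the winner's own maximum.
    price = min(top, runner_up + increment)
    return winner, price

bidders = {"alice": 25.00, "bob": 18.50, "carol": 22.00}
winner, price = resolve_proxy_auction(bidders)
print(winner, price)  # alice 22.5 -- carol's max plus one increment
```

The key behavioral implication is that bidders can simply bid their true valuation, since their maximum determines whether they win but not (directly) what they pay.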
eBay's recent feedback and pricing changes are another example. Pricing changes explicitly favor large sellers. In addition, it's now changed the longstanding (if only middling effective) mutual feedback system so that sellers can't leave neutral or negative feedback. Instead, eBay claims that it will pump up enforcement against non-paying buyers. In other words, the feedback system now looks a lot more like a conventional e-Commerce system. After all, buyers get to rate online merchants but Buy.com doesn't go around rating its buyers; it just wants its money. However, the new setup is probably not as attractive for a small-time seller who might want to be more selective about the buyers they sell to instead of just playing the odds like any store of significant size does.
eBay provides lots of game theory data points with respect to, not only auction theory, but all sorts of buyer and seller behavior. However, beyond the specific, it also provides plenty of evidence that relatively small changes in the ground rules can create outsize consequences in the way that communities interact and operate--for better or for ill.
That's worth remembering if your business or organization depends on community--and oh so many do today.
Writing in Computerworld, Eric Lai notes that:
Despite the popularity of .Net within companies and other employers, Microsoft has seen its standing among students continue to be eroded by a combination of open-source programming tools and Adobe Systems Inc.'s Web design software. Now, after years of using half-measures to try to beat those technologies on college campuses, Microsoft is taking a bolder step by making four pillars of the .Net platform available free of charge to tens of millions of students in the U.S., Canada, China and eight European countries.
A few observations here.
If you're a follower of Sun and Solaris as I have been for many years, there's a familiar thread here. A major cause of Sun's financial problems--which the company is still working to put behind it--was that it lost a goodly chunk of the core developer constituency that gave it market relevance. And, in CEO Jonathan Schwartz' words: "To establish a high-integrity relationship with a broad and participative community is really the principal objective of bringing Solaris into the open-source world." Yes, there are still many developers for Windows and other Microsoft software platforms, but it often seems a dutiful and passionless crowd.
The analogy between Microsoft/Windows and Sun/Solaris is not a perfect one. Perhaps the most notable distinction is that Microsoft has a broad presence in both consumer and SMB markets that Sun did not (and does not). The inertia this provides insulates Microsoft to at least some degree from the shifting breezes of developer fashion. Nonetheless, when you add in that Microsoft also has to contend with the shift of computing into the network cloud (and the corresponding diminution of Microsoft's incumbent advantages that implies), a weakening connection to developers can't be viewed as anything but bad.
Finally, while giving away software may well be a reasonable step for Microsoft to take, it's hardly a sufficient strategy to counter the rise in Open Source (and programming to application programming interfaces in a Web 2.0 or Software as a Service context). Yes, students like free. Who doesn't? But they also want to work on projects that they consider interesting, relevant, and--yes--cool. And that has very little to do with Microsoft making products available under an academic license.
We put stuff into computers (and, for that matter, get stuff out) in pretty much the same way we have for a good couple of decades.
Of course, we still use keyboards of a fairly standard design as our primary mechanism to feed words into a computer and mice are well-ensconced as the navigational tool of choice. Over in the gaming world, it's the familiar two-handed game controller that predominates. In fact, I sense that one sees fewer joysticks, steering wheels, various oddball keyboards, and trackballs than one saw in the past. This probably reflects that "productivity" PCs are shifting toward notebooks on the one hand and that gaming is moving toward consoles on the other.
The one clear counter-example is the emergence of "thumbing" (as opposed to typing). But this is really more about making compromises in service of the form factor of handheld devices than it is a genuine innovation--however commonplace it has become.
However, we may be starting to see some genuine change.
The motion-sensing Nintendo Wii remote isn't a particularly new concept. We've see academic work in data gloves of various types going back to the 1990s. What's different is that the Wii is mass market. Volumes mean not only lower cost, but an incentive for software makers to write games and other applications that support and use the device in interesting ways. Because it corresponds to the physical world, hand movement seems a natural fit with many tasks and manipulations. As a result, I expect that we'll see descendants of the Wii in increasingly widespread use.
Another big trend we're seeing is multitouch. As CNET News.com's Tom Krazit notes, it's Apple that has pushed this technology into the mainstream--starting with the iPhone in the handheld arena and the MacBook Air in the notebook space. (On the notebook, it's the touchpad rather than the whole screen that is multitouch and it's less of a big deal as a result.) I've been arguing for a while that being able to draw a "napkin drawing" or a "whiteboard sketch" is one of the things that's largely missing today when we work and collaborate remotely. The combination of multitouch and writeable LCDs at affordable price points, and supported by software, would be a genuine step forward.
These aren't the only possibilities. Six-degrees-of-freedom controllers have long been used in 3D engineering programs, but they've been priced for the CAD professional. Logitech has come out with the affordable (about $55) 3Dconnexion SpaceNavigator Personal Edition, a 3D navigation device that makes a great Google Earth companion. If 3D virtual worlds ever take off in a big way, devices such as these would be a natural and obvious fit.
Then there's always voice recognition. It's getting better. But that could be a statement for just about any year. And general-purpose voice recognition remains a niche. You won't catch me betting on it (although I suspect its time will come--someday).
When photo site SmugMug initially contacted me, it was in the context of some of the pieces that I had written.
In a nutshell, relative to Flickr, SmugMug has opted less for an open-community orientation and more for ways to store and display photos with a rather granular set of access controls. (See some discussion by CEO and "Chief Geek" Don MacAskill.)
These are important topics that I'll be discussing further in due course, but today, I'm going to focus on SmugMug's physical infrastructure.
During my conversation last week with President Chris MacAskill, he made some points about using Amazon.com's Simple Storage Service (S3) that may not be widely appreciated. (S3 is Amazon's "storage as a service" offering that users pay for based on the amount of storage space used and data transferred. Like Amazon's EC2 compute service, it falls roughly into the "Hardware-as-a-Service" concept.)
SmugMug was one of the earliest S3 users. As Chris tells the story, SmugMug was buying a "mindblowing" number of Xserves from Apple. The Silicon Valley-based company was running out of power and space--the usual story.
However, Chris raised another point that bears mention. The company was having to buy all this gear up-front, in advance of the revenues (i.e. user subscriptions) that it would hopefully generate. This was difficult from a cash flow perspective--especially for a company that wasn't venture capital-backed. But the reality is actually worse.
Not only were the expenses up-front, but they were capital expenses. From an accounting perspective, this means that only the depreciation on the systems hits the P&L in any given year--even though all the cash went out the door at purchase time. The result? You may look profitable, but cash flow is tight and you could end up effectively "prepaying" taxes.
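A small illustration of that accounting mismatch, using made-up numbers (the dollar figures and three-year depreciation schedule are my assumptions, not SmugMug's):

```python
# Illustration (invented numbers): a $300,000 server purchase,
# depreciated straight-line over 3 years. The P&L sees $100K/year of
# expense, but the full $300K of cash is gone in year one--so reported
# profit can look fine while cash is tight, and taxes are owed on the
# paper profit.

def straight_line(cost, years):
    """Straight-line depreciation: equal expense each year."""
    return [cost / years] * years

capex = 300_000
for year, expense in enumerate(straight_line(capex, 3), start=1):
    cash_out = capex if year == 1 else 0
    print(f"Year {year}: P&L expense ${expense:,.0f}, "
          f"cash outflow ${cash_out:,.0f}")
# Year 1: P&L expense $100,000, cash outflow $300,000
# Year 2: P&L expense $100,000, cash outflow $0
# Year 3: P&L expense $100,000, cash outflow $0
```

With a pay-as-you-go service, by contrast, the expense and the cash outflow track each other month by month.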
Then Amazon called out of the blue, after a conference, and told the site about S3. At Amazon's initial target of 50 cents per gigabyte, it was intriguing. When Amazon ended up pricing its offer at 15 cents, Chris says the company's "jaws dropped."
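Using the 15-cents-per-gigabyte storage price mentioned above (and an assumed, purely illustrative transfer price, since the article doesn't give one), a back-of-the-envelope monthly S3 bill looks like:

```python
# Back-of-the-envelope S3 cost model. The $0.15/GB-month storage price
# comes from the article; the transfer price and monthly transfer
# volume are assumptions for illustration only.

def monthly_s3_cost(stored_gb, transfer_gb,
                    storage_price=0.15, transfer_price=0.10):
    """Return the estimated monthly bill in dollars."""
    return stored_gb * storage_price + transfer_gb * transfer_price

stored = 400 * 1024    # 400 TB of photos, in GB (per the article)
transfer = 50 * 1024   # assumed 50 TB/month transferred out, in GB
print(f"${monthly_s3_cost(stored, transfer):,.0f}/month")  # $66,560/month
```

The appeal is that this number scales smoothly with actual usage, with no up-front purchase at all.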
Initially, SmugMug used Amazon S3 for backup while keeping all of its primary storage in-house. At the beginning, it wasn't thrilled with Amazon's uptime, but it wasn't disappointed, either. More troubling was that Amazon wasn't especially transparent about problems when they did occur--which seems to remain a sore point.
However, over time, SmugMug started seeing better uptime from Amazon than it could deliver in-house. It now has more than 400 terabytes of photo and video storage on S3, and it can add as much as 1TB on busy days.
Now that the company has switched much of its primary storage to S3 as well, there's another economic point worth making. Were SmugMug to host all this storage in-house, it'd actually have to buy more like 1.2 petabytes because it'd need enough to support any growth spurts and enough for backup, as well as primary storage.
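A rough sketch of that in-house sizing arithmetic (the headroom and backup multipliers are my assumptions chosen to match the article's ~1.2-petabyte figure, not SmugMug's actual planning numbers):

```python
# Rough sizing sketch: to host N terabytes of primary data in-house,
# you must provision for growth spurts plus a full backup copy.
# The 50% headroom and single backup copy are illustrative assumptions.

def in_house_capacity_tb(primary_tb, growth_headroom=0.5, backup_copies=1):
    """Total TB to provision for a given amount of primary data."""
    live = primary_tb * (1 + growth_headroom)  # primary + spurt headroom
    return live * (1 + backup_copies)          # plus backup copies

print(in_house_capacity_tb(400))  # 1200.0 -- i.e., ~1.2 PB for 400 TB
```

That 3x multiplier between data held and capacity bought is exactly what a hosted service lets you stop thinking about.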
With Amazon S3, the company effectively gets backup for "free." (Of course, that assumes that you trust Amazon not to lose data, but as far as I know, there has been no data loss associated with any Amazon outages.)
SmugMug is also a heavy user of Amazon's Elastic Compute Cloud (EC2), even though the service is still in beta test mode. One of the most appealing features of EC2, according to Chris, is that SmugMug can handle load spikes without paying for that peak capacity all the time. For example, loads go way up after a three-day holiday weekend, when people upload all their pictures on Tuesday.
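To illustrate why spiky load favors a pay-per-hour model, here's a comparison (all prices and load numbers are assumed for illustration) of provisioning for the peak year-round versus paying only for instance-hours actually used:

```python
# Illustrative comparison (assumed prices and load profile): fixed
# provisioning must cover the post-holiday-weekend peak all month,
# while an EC2-style model pays per instance-hour actually consumed.

HOURS_PER_MONTH = 720
PRICE_PER_INSTANCE_HOUR = 0.10  # assumed hourly rate

baseline, peak, spike_hours = 10, 40, 72  # instances; 72-hour spike

fixed_cost = peak * HOURS_PER_MONTH * PRICE_PER_INSTANCE_HOUR
elastic_cost = (baseline * (HOURS_PER_MONTH - spike_hours)
                + peak * spike_hours) * PRICE_PER_INSTANCE_HOUR

print(fixed_cost, elastic_cost)  # 2880.0 vs. 936.0 per month
```

The spikier the load relative to the baseline, the bigger that gap gets.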
All that said, the company does maintain some of its own servers. It does this, in part, to provide a sort of cache for "hot" photos. (Chris estimates that 10 percent of the photos on the site get 90 percent of the traffic.) Related is the fact that SmugMug runs its MySQL database servers in-house (so they'll be physically close to the hot photos).
I suspect that we'll see these hybrid architectures--even at aggressive Cloud Computing adopters--a lot. You sometimes need that little bit of customization or specialization that you can't get from a service that has to be relatively standardized. That said, SmugMug is an aggressive adopter, and it gives us some good insights into what can be gained by making the infrastructure largely someone else's problem.
Radio frequency identification, a technology that allows identification of objects using radio waves, hasn't exactly been a failure. The Wikipedia article on RFID lists all manner of examples of RFID use, ranging from the whimsical to the more substantive. And early proponents of RFID, such as Wal-Mart and the U.S. Department of Defense, have moved ahead with large-scale RFID deployments affecting both themselves and their suppliers--albeit at a slower pace and in a more limited way than originally envisioned.
Still, set the selective use of RFID against the ubiquity of barcodes and the contrast is striking. It's arguably just a normal technology adoption curve--"valley of despair" and all that--but that doesn't make it any less disappointing for its proponents. In general, at least from the supply chain angle, RFID is so far mostly focused on goods that are either high-value individually (such as parts for Boeing's 787 Dreamliner) or in aggregate (such as full pallets of less expensive items).
Thus it was with both interest and some amusement that I discovered Alta in Utah (where I'm skiing this week) now using RFID for its lift tickets, replacing the familiar sticky paper and metal "wicket" that are still the most familiar form of ticket to most skiers. You put this plastic RFID card in a jacket pocket (preferably away from credit cards and electronics) and a little gate swings open at the lift if you have a valid, paid ticket.
It's a nifty system. It's "hands-off," so there's no need to stick a card with a magnetic strip into a reader--a fairly common system at a variety of ski areas. They've also developed a system with a swing-out gate rather than an annoying turnstile. Furthermore, the card can be refilled online and can easily accommodate pricing schemes such as multi-day discounts within a given time period and the like. (Although the current scheme is fairly bare-bones.)
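The gate logic itself is simple enough to sketch. Here's a hypothetical version (the card IDs, data model, and validity rules are invented for illustration; Alta's actual system is surely different):

```python
# Hypothetical sketch of lift-gate logic: read a card's RFID tag,
# look up the associated ticket, and open the gate only if the ticket
# is valid today. The ticket data model is invented for illustration.

from datetime import date

tickets = {
    "CARD-1234": {"valid_from": date(2008, 2, 18), "days": 3},
}

def gate_should_open(card_id, today=None):
    """Return True if the card maps to a ticket valid on `today`."""
    today = today or date.today()
    ticket = tickets.get(card_id)
    if ticket is None:
        return False  # unknown card: keep the gate closed
    elapsed = (today - ticket["valid_from"]).days
    return 0 <= elapsed < ticket["days"]

print(gate_should_open("CARD-1234", date(2008, 2, 19)))  # True: day 2 of 3
```

Multi-day discounts and online refills then become simple database updates against the card ID, with no new physical ticket needed.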
So why amusement? Well, this is perhaps one of the unlikeliest of ski areas to implement such a relatively cutting-edge technology. (Its use at a variety of ski areas, mostly in France, and at Ski Dubai notwithstanding, it's still uncommon.) Because Alta is...Alta.
This is, after all, one of three ski areas in the U.S. that still doesn't allow snowboarding. The lodge where I'm staying was originally constructed by the WPA. The wife of a Dartmouth friend of mine describes an Alta ski vacation as something akin to "boot camp." It doesn't require quite as much traversing (aka climbing) to get from lift to lift as it did in past years, and the Alta Ski Lifts Company has upgraded some lifts here and there. Still, it's perhaps seen less change than any other American ski resort of comparable stature in the past decades.
On the one hand, this sort of change reflects just how accessible computer technology has become. It almost goes without saying (although a couple of longtime lodge guests were a little bit surprised) that I'm sitting here typing this via a Wi-Fi connection. However, it's also a reminder that change--even when generally positive--can have its downsides as well, even if they're small. As this article about the new Alta Cards notes (see the article for a picture of the old ticket):
While much will be gained in the way of comforts and convenience, with the phasing out of the conventional passes, Altaholics will unfortunately have to say goodbye to one of the mountain's richer traditions: the personalized messages printed below that classic Alta-red banner on the tickets, denoting various "special days" celebrated at Alta.
"We're going to feel a sense of loss and change, not only those within the company, but our guests, too," (Connie Marshall, Alta's director of sales and public relations) says. "A vestige of personalization at Alta, people would even call ahead to request this service."
With the ProLiant DL785 G5 Server, Hewlett-Packard has re-entered the 8-socket x86 server space. This system has twice the computing headroom of the quad-processor servers that are generally considered at the top end of the volume or so-called commodity server space.
HP isn't new to this market segment. In 1997, Intel bought a company by the name of Corollary that was developing a chipset--eventually known as "Profusion"--that effectively "glued together" two standard quad-processor x86 buses into a single 8-way symmetric multiprocessor (SMP). Intel not only completed development, it also gave the chipset legitimacy by putting the Intel name on it. Then Microsoft provided the last major missing piece with Windows 2000, an OS that not only showed real progress in reliability and scalability over its predecessors, but also lent credibility to Microsoft's efforts to be perceived as a serious OS vendor for serious servers.
The ProLiant line--initially as a Compaq server brand and then, after Compaq's acquisition by HP, as an HP one--used this chipset and its successors for a succession of server products, even after Intel decided to stop contributing to further development. (Intel had, at various points, planned to do a Xeon version of Itanium's 870 chipset, but this never ended up happening.) Compaq's own version, the "ProLiant F8" chipset, adapted Profusion for the architecture and bus speeds associated with newer Intel processors, but did not fundamentally alter the design. (Subscribers can read about more of the historical background here.)
However, HP eventually decided to pull the plug on in-house development of 8-way chipsets for Xeon. I've broached the question "Why?" with HP executives on a number of occasions over the past few years and their responses have been pretty consistent. They've boiled down to two basic rationales: the investment required to design custom chipsets was hard to justify for a relatively modest market, and HP's Itanium-based Integrity servers already covered the high end.
So what's changed to bring ProLiant back into this space? From my perspective, there's probably not one single reason but rather a few different factors that collectively swung the needle from "No" to "Yes."
It's easier. Rather than using Intel processors, the ProLiant DL785 G5 uses Advanced Micro Devices' quad-core Opteron "Barcelona" processors. Unlike Xeons, the AMD processors can support up to 8-socket servers without the use of special server vendor-developed chips. A lot of effort (and therefore money) still goes into designing, qualifying, and supporting a system in this class. However, a design that primarily integrates existing in-house and third-party components and technologies still costs much less than one that adds bespoke chipset design to the mix.
The market is larger. Dual-socket servers still make up the bulk of server unit sales. However, server virtualization, in particular, has kicked up demand for larger boxes--demand that once seemed to be on an inevitable slide. Server virtualization allows as many workloads (more or less) to run on a system as its processor, memory, and I/O capacity can support. Given this, many users are starting to think that they're better off consolidating onto larger servers rather than smaller ones. Doing so reduces the number of physical boxes to manage. In addition, larger servers often come with a more sophisticated array of reliability and management features. The market for scale-up x86 servers isn't going away either--for reasons including the increasing sophistication of Microsoft SQL Server and the growth of Solaris on x86.
Integrity is only a partial solution. From HP's perspective, the "buy Itanium" message was always logical enough. Most of the critical high-end Windows applications were available and Integrity, after all, was specifically optimized for that space. It's a good story, but the reality is that a lot of Windows customers don't want to support multiple processor architectures in their environments--even if the software is (mostly) the same.
As a final point, the HP of today is a tightly managed and highly measured organization. And ProLiant is clearly one of the growth stars. Thus, it's not hard to imagine that politely leaving high-end Windows opportunities to Integrity came to be regarded as sub-optimal from the perspective of HP as a whole.
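The consolidation math behind the "market is larger" argument can be sketched in a few lines. All of the host capacities and workload sizes below are invented for illustration--the point is simply that the number of workloads a host can absorb is bounded by whichever resource runs out first:

```python
# Rough, illustrative VM-consolidation sizing: how many workloads fit on a
# host is limited by whichever resource (CPU, memory, I/O) is exhausted
# first. All numbers here are made-up assumptions for illustration.

def vms_per_host(host, vm):
    """Return how many copies of `vm` fit on `host`."""
    return min(host[k] // vm[k] for k in vm)

small_host = {"cores": 8,  "gb_ram": 32,  "io_mbps": 400}
large_host = {"cores": 32, "gb_ram": 256, "io_mbps": 1600}
workload   = {"cores": 1,  "gb_ram": 4,   "io_mbps": 50}

# One large box replaces several small ones -- fewer systems to manage.
print(vms_per_host(small_host, workload))  # 8
print(vms_per_host(large_host, workload))  # 32
```

In this toy mix, one 8-socket-class box absorbs the load of four smaller ones, which is the sort of arithmetic driving consolidation decisions.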
Whatever the precise balance of reasons, HP is back in the 8-socket x86 game. It's a space that HP had largely ceded to IBM's X4 designs. Now HP is re-engaging aggressively, as it did with blades and as it has done across so much of the x86 space.
I've been on a bit of a de-cluttering jag over the past year or so. Too much paper, too much "stuff" around the house. So I've been slowly dumping the junk and selling or donating the rest.
This includes photographs. I had stacks of snapshots of family, friends, places, and so forth sitting around in various drawers and boxes. I had made a half-hearted effort to digitize some of the old slides previously, but scanning is really tedious work. Scanning the hundreds of photos involved here was just more than I realistically felt like tackling.
Over the past couple of years, I'd had some slide scans done locally by a small photo store and a large one. I wasn't impressed in either case. I paid about $1 per scan and the results were pretty mediocre. I don't doubt that I could have eventually tracked down someone in the Boston area who could do a better job for a reasonable cost, but we're still talking pretty big bucks for a mass scan-athon.
So I decided to give ScanCafe a try, and I recently received the results. Bottom line? Good quality and, at $0.24 per slide and $0.27 per print, a price that's hard to beat.
I'll dig into my experience in a bit more detail, but let's get one thing out of the way first. The reason the prices can be so good is that the Burlingame, CA-based company does the scans at its facility in Bangalore, India. The way it works is that you ship your box of photos to Burlingame (you print out a UPS label when you place your order online), where they are batched up in a palletized air freight container and shipped to India.
Unsurprisingly, the "ship to India" part causes some intake of breath in a lot of people. However, having gone through the process and thought about it some, I think the incremental risk is pretty small. If you have a handful of photos that you would be especially heartbroken to lose, it's perfectly understandable that you might not want to trust them to a shipping company at all. But once your photos are being shipped around anyway, the international air transport step wouldn't seem to make a big difference. In fact, given that ScanCafe is understandably sensitive to this issue, it seems to have put particular thought into both the whole logistics process and its transparency to customers.
(For what it's worth, for many years I've had slide film processed by Kodak using prepaid mailers. I had one batch of several rolls lost; I'm pretty sure it was the local Postal Service's fault on the return leg. And a couple of years ago, Kodak made such a hash of closing down its Fair Lawn processing facility that I had film missing for months. In other words, staying domestic is no guarantee.)
With that out of the way, how about the rest of the experience?
Quality. I ordered basic 3000 dpi JPEG scans of my slides and 600 dpi scans of my prints. For $0.09 more you can get higher-resolution TIFF scans. I didn't bother given that these are really "memory shots" and I'm not planning on making big prints. The overall quality was quite good. Many of the photos were old. Slides dating back to about 1960 were faded and dirty in many cases. I found the corrected color balance to be spot-on, and the general cleanup to be well done--especially for the price.
A minor caveat is that the JPEG files are relatively highly processed. This means that they look pretty good "out of the box" with relatively high saturation and dark blacks and bright whites. That's great if you want to look at and share the photos more or less as-is. It's not so ideal if you want to process them further yourself. (For $0.14 per slide you can get a TIFF scan with no processing plus a fully-processed JPEG.)
Turnaround time. This was not a particularly speedy process. In fact, it took close to three months door-to-door. As I noted earlier, there's a nice portal that lets you see where your order is, so I wasn't concerned; I just wanted my scans. However, the company has recently expanded its scanning facility significantly (now 20,000 square feet) with the goal of getting turnaround down--although the nature of the operation means that it's never going to be an especially fast option.
Customer service. I had one slight billing problem (my 8"x10" prints were charged at the rate for larger pieces of paper). I received a prompt reply to my e-mail to customer service, and the matter was resolved within a day.
Pricing. This is one of the real strengths of the service so long as you're sticking to "standard" media. This includes 35mm color negatives, 35mm color slides, and paper photos up to 8"x10". They'll do other types of scans (such as newspaper clippings and black & white negatives), but those are $0.99 each. I note that they've actually increased the price for newspaper clipping/letter/paper artwork from $0.37; they've obviously decided to focus on a specific set of high-volume media types. You get to review the scans before they're shipped back to you. You can delete up to 50 percent and not pay for them. In practice, this is probably most useful if you're having negative strips done, given that you can't specify that only specific frames be scanned. (Standard color negative scans cost $0.19 per frame.)
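For a rough sense of what an order runs at these rates, here is a back-of-envelope calculator. The order mix is hypothetical, and crediting deleted scans at the cheapest applicable rate is a simplifying assumption of this sketch, not ScanCafe's actual billing logic:

```python
# Back-of-envelope order cost at the per-item rates quoted in the post.
# The order mix is hypothetical, and crediting deleted scans at the
# cheapest applicable rate is a simplifying assumption of this sketch.

RATES = {"slide": 0.24, "print": 0.27, "negative_frame": 0.19, "other": 0.99}

def order_cost(counts, deleted=0):
    """Total dollars for an order; up to 50% of scans can be deleted
    at review time and not paid for."""
    total = sum(RATES[kind] * n for kind, n in counts.items())
    max_deletable = sum(counts.values()) // 2
    credit = min(deleted, max_deletable) * min(RATES[kind] for kind in counts)
    return round(total - credit, 2)

print(order_cost({"slide": 300, "print": 200}))  # 126.0
```

Five hundred images for well under $150 is the kind of total that made the mass scan-athon feasible for me.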
Overall, I give ScanCafe high marks. I combined the photos on the DVD I received with other scans and digital images and was able to give my brother a nice selection of family photos. Who knows when I would have gotten to it were I doing the scans on my own?
Over the weekend, I enjoyed reading a New York Times article by Randall Stross titled "The Computer Industry Comes With Built-in Term Limits." It focuses on Microsoft and Google and how:
two successive Microsoft chief executives have long tried, and failed, to refute what we might call the Single-Era Conjecture, the invisible law that makes it impossible for a company in the computer business to enjoy pre-eminence that spans two technological eras. Good luck to Steven A. Ballmer, the company's chief executive since 2000, as he tries to sustain in the Internet era what his company had attained in the personal computing era.
This observation that companies dominant in one phase of a market rarely enjoy the same success through major transitions is hardly unique to the computer industry.
One common explanation is offered by Theodore Levitt's famous 1960 Harvard Business Review article, "Marketing Myopia," which popularized the idea that companies should define themselves in terms of markets and customer needs, rather than products. A common marketing class illustration is how the railroads thought of themselves as running trains rather than providing transportation--with the result that they were marginalized in many respects as transportation technology changed.
There's doubtless a lot of truth to this contention, but, as I discussed in the context of the photo business previously, shifting an entire product foundation is enormously challenging, and past skill sets and ecosystems don't necessarily travel well from one generation to another. In the earlier transportation example, what particular expertise or competitive advantage would Penn Central have brought to running an airline? Very little.
In the case of Microsoft, the technology gap is perhaps less yawning between the type of software on which it made its fortune and that which is widely consumed over the network today. (That said, there are many differences in development model, adoption process, community building, and so forth.)
However, I don't see the issues faced by Microsoft as so much about marketing myopia. As the article notes:
In a 1995 internal memo, "The Internet Tidal Wave," Mr. Gates alerted company employees to the Internet’s potential to be a disruptive force. This was two years before Clayton M. Christensen, the Harvard Business School professor, published "The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail" (1997). The professor presented what would become a widely noted framework to explain how seemingly well-managed companies could do most everything to prepare for the arrival of disruptive new technology but still lose market leadership.
Thus, the meme that Microsoft is "dead" (in theory) is based less on an argument that Microsoft is blind to what's going on with network computing than on the observation that it hasn't really effected any major changes in response.
There's a reason for this. It's easy, if only relatively so, to spot major transitions. (Although, to be sure, harder to spot them before they're obvious to everyone and harder still to discern their precise impact and timing.)
But it tends to be really, really hard for cultural and organizational reasons to do what needs to be done about them. And, perhaps even harder and at times impossible, to make the necessary business changes.
I call it the "tyranny of the installed base." I saw plenty of it when I worked at minicomputer maker Data General in the 1990s. Customers want bug fixes and enhancements to their existing products--even if it's some legacy database that fewer and fewer people use with each passing year. The result is that lots of resources get sucked into supporting the "old stuff," leaving that much less energy, money, etc. for the "new stuff."
But the real issue here is more insidious. A company, especially a public company, can't really "Just Say No" to that installed base and tell them to take their business elsewhere. Imagine, if you will, this scenario: Ballmer wakes up next Monday morning after having an epiphany over the weekend. He walks into Redmond, tosses a few chairs for emphasis, and announces that Microsoft is going to immediately discontinue selling and developing its Windows operating system and Office products because they're mired in the past and have become too much of a distraction from what's really important--its online services business.
I think we know what comes next. Microsoft's stock price falls through the floor and Microsoft's board of directors sends the men in the white coats to take Mr. Ballmer somewhere he can get some extended rest. While such a scenario would doubtless cause considerable delight in some quarters, I think most of us can agree it's neither practical nor a particularly good idea.
It's hugely challenging to jump from one wave to the next even when you see it coming with perfect clarity. The next wave may even be bigger in terms of customers, revenues, and everything else. But there's a trough in between.
I'm just wrapping up at the Microsoft STB (Server and Tools Business) Analyst Summit at TechEd down in hot and stormy Orlando this week. It was a generally good event--always good to get an overall strategy pulse and spend a fair bit of time chatting one-on-one--if a bit redundant with the other two Microsoft events I've attended in the recent past, the Microsoft Management Summit and MIX08.
Unsurprisingly, one of the areas Microsoft hit on hard at this event was virtualization--especially its upcoming Hyper-V hypervisor for Windows Server 2008. However, also on display were its various other virtualization flavors, including the application virtualization that came from its Softricity acquisition. (I'll be delving into Microsoft's virtualization strategy in depth in an upcoming report, complementing recent reports on the corresponding strategies at Citrix and VMware.)
Today, though, I'm going to keep things at a higher level. Coming out of this Microsoft event and the others I've attended, I have two broad observations about the company's strategy--each of which represents a considerable strength and opportunity but, simultaneously, also holds within it key challenges for Microsoft going forward. Let's take the points one at a time.
Better together but more monolithic.
The first observation is that Microsoft does a mostly admirable job of portfolio-level design that results in a suite of products that integrate with and cross-support each other. SharePoint works closely with Exchange and other Microsoft products. Visual Studio and Expression Studio present two different views of the same code for developers and designers respectively. Microsoft technologies such as Active Directory permeate its products. No company's suite of software truly works together "seamlessly"--no matter what the marketing literature says--but Microsoft comes closer than anyone with comparable breadth.
The flip side of this togetherness, though, is that Microsoft products then tend to play less well with the other children than is the norm. Microsoft has improved in this regard in recent years (how could it not?), but any number of proprietary protocols and undocumented interfaces make plugging and playing with a Microsoft environment far more difficult than with products that use more standardized approaches.
Embrace the cloud--as a means to extend licensed software.
Microsoft claims to embrace software in the cloud. It points to the massive amount of code pushed out by Windows Update, to online platforms like Virtual Earth, to Xbox LIVE, to 50,000 Microsoft servers returning search results. In absolute terms, Microsoft is already doing a great deal with cloud computing in its various forms. Indeed, based solely on the quantity of bits that it's pushing around for network-based computing, there's an argument to be made that Microsoft is already a big player in this space. At the least, if one listens to speeches by Ray Ozzie and other Microsoft execs, it's clear that Microsoft is well-tuned into the notion that more and more software is going to be delivered from the network.
At the same time, Microsoft is also clearly determined to approach cloud computing in a largely non-disruptive way. Most of Microsoft's cloud-related products and initiatives are adjuncts and complements to traditional licensed software, not a replacement for it. Thus, we have teaming features for Microsoft Office, online extensions for Exchange, and hosted offerings for a number of their standard licensed products. In short, Microsoft is primarily focused on either increasing the value of its licensed software using the network or simply offering alternative ways to consume the same bits. It's not nearly as interested in radically changing the economics or use models associated with its existing, and very lucrative, business.
Which shouldn't surprise anyone.
Sorely tempted as I was to do otherwise, I sat on my keyboarding fingers while the VMware saga unfolded yesterday--or at least I limited myself to posting some initial thoughts via Twitter. I know a lot of the personalities involved--I first met ousted VMware CEO Diane Greene in 2000--but I didn't feel I knew enough to discuss what happened in detail. Now, the day after, I don't claim to know exactly what happened within EMC's walls, but I've heard and surmised enough to feel comfortable offering some thoughts.
This wasn't predominantly about financial results. A lot of financial commentators have focused on a 2008 revenue forecast that is "modestly below" previous guidance and a stock price well below once high-flying levels. Come on. The company was still targeting about 50 percent year-over-year revenue growth. It's hardly Diane's fault that investors bid VMware stock up to unsustainable levels. Sure, at some level, financials played into this whole debacle, but only indirectly, insofar as stronger revenue forecasts or a higher market cap may have given Diane a stronger bargaining position.
Nor was this about strategy or execution failure on the part of VMware. One story puts it that: "A carefully considered opinion is that the EMC board doesn't believe Greene is the person who can take VMware to the so-called 'next level.'" Carefully considered perhaps. But I profoundly disagree. If anything, VMware has taken the lead in articulating the value of a software ecosystem that leverages a virtualized foundation but goes way beyond basic hypervisor functions. Perhaps VMware didn't badge it with some vaguely cool-sounding but largely content-free name; they just went with "Virtual Infrastructure." But that's not a bad thing in a market where many IT shops equate VMware with virtualization. Microsoft has an airy-fairy feel-good slogan ("Dynamic IT"), but in looking at all of the process, lifecycle, and other virtues dynamism could lead to, VMware is in many ways uniquely delivering on them, while just about everyone else is mostly talking about them.
How about Microsoft? I've also heard suggestions that this somehow happened because Microsoft was starting to get its virtualization efforts on track. This is supposed to be a surprise? And what was Diane supposed to have done about this, rather than continuing to take virtualization to higher levels and into more functions as VMware has done? Drop a nuke on Redmond? VMware has actually played its hand against Microsoft quite well, leveraging a first-mover advantage rather than milking a lead.
So what's left? All the evidence suggests that Diane's ouster revolved around the degree to which VMware would remain independent of EMC. As this Fortune article from last year indicates, this wasn't a new source of friction between VMware and its EMC parent. Writer Adam Lashinsky presciently notes:
The biggest headache just might be EMC. The two companies continue to have as little to do with each other as possible. Greene and her acolytes butt heads frequently with EMC's senior executives, who remain annoyed they cannot benefit more directly from owning VMware by selling its software.
VMware's position is unique among EMC acquisitions. EMC standard operating procedure is to aggressively integrate the companies it acquires, moving employees around and frequently putting EMCers in charge. Not so VMware. After some initial signs that some decision-making and various business processes were being pulled into EMC's Hopkinton headquarters, VMware went right back to operating quite independently out of Palo Alto. VMware even built its own snazzy new eco-friendly headquarters last year.
I have little doubt that this conflict was at least partly personality-driven from both sides. However, our own discussions with key VMware storage, system, and software partners back up Diane's long-held contention that an arms-length relationship between VMware and EMC after the acquisition was absolutely essential to maintaining goodwill and cooperation with those other partners.
Adding to this long-standing conflict was the question of whether VMware would completely spin itself out of EMC. (The earlier IPO was for only about ten percent of the company.) From Steve Lohr of The New York Times:
Ms. Greene was fired after she refused to resign or take another position at VMware, according to a VMware manager who asked not to be named because he was not authorized to speak publicly. The point of conflict, the person said, was that Ms. Greene had been pushing hard for VMware to be spun off early next year. After five years of ownership, a subsidiary can be sold off in an essentially tax-free transaction. EMC bought VMware for $635 million in cash in December 2003.
So, the bottom line seems to be that Diane finally pushed too hard for VMware independence with an EMC organization that, in no small part, has been longing to pull it in more closely. Our understanding is that EMC CEO Joe Tucci was one of the people who actually supported a degree of independence for VMware in the past. However, whatever the individual opinions of EMC execs and its largely-insider board, the deed got done.
And that's very unfortunate for EMC and VMware. I don't really buy Ashlee Vance's calling out of Joe Tucci as the personification of all things EMC in this characteristically snarky Register story, but I reluctantly concur with the rest of his analysis.
"Real world" examples of some trend or business model are great. Theory is fine up to a point but eventually it's awfully nice to connect up with a concrete example that gives the theory some real cred.
At the same time, examples can mislead us. Often they turn out to be anomalies. Maybe a company is some sort of historical quirk, a product of a very specific time and place. Or maybe some technology approach is valid enough--but only for a very narrow set of needs. One warning sign is seeing the same tired examples trotted out for every discussion, every news article, and every conference.
I see some of that in all the following cases. I certainly won't go so far as to say that the underlying trends or business models are illusory. But I do think they're more limited or further away than their most overenthusiastic proponents suggest.
The Long Tail, as popularized by Wired's Chris Anderson, is a hot meme among the blogging and Web 2.0 crowd. Simply put, the Long Tail holds that bestsellers don't account for the majority of sales when you tally up the totals at Amazon or Netflix; rather, it's the far more numerous other 80 or 90 percent of content that collectively dominates. From a business perspective, the significance is that there's money to be made selling what's in the long tail.
However, the number of true long tail businesses gets thin outside of aggregators of digital media--the companies who have minimal costs to acquire, inventory, and sell incremental low-volume products. Amazon, in particular, is a highly atypical, if not unique, retailer in terms of scale. In fact, we're starting to see a body of evidence that suggests that the long tail is, if not necessarily wrongheaded exactly, more limited in applicability and degree than some of its proponents have suggested.
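The basic claim is easy to illustrate with a toy catalog. Here sales fall off by popularity rank following a Zipf-like power law; the catalog size, exponent, and sales figures are all synthetic assumptions, chosen only to show how the many low-volume items can collectively outsell the hits:

```python
# Toy illustration of the Long Tail: with a Zipf-like sales distribution,
# the many low-volume items can collectively outsell the few bestsellers.
# Catalog size, exponent, and sales figures are synthetic assumptions.

def zipf_sales(n_items, exponent=0.8, top_sales=10_000):
    """Sales per item by popularity rank, falling off as a power law."""
    return [top_sales / rank**exponent for rank in range(1, n_items + 1)]

sales = zipf_sales(100_000)
head = sum(sales[:1000])   # the 1% of titles that are "hits"
tail = sum(sales[1000:])   # everything else
print(f"tail share of total sales: {tail / (head + tail):.0%}")
```

Whether the tail actually dominates depends entirely on the exponent--steepen it and the hits win--which is one way of seeing why the model fits some businesses and not others.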
We've also seen pure Open Source much touted as a viable business model. By "pure," I mean a model that doesn't hold any software back for paying customers only. The hope is that enough users will elect to pay for support and other services to cover a company's cost and profit. Red Hat, a profitable and growing company, is the poster child here.
But Red Hat really is exceptional. It's emerged as the unquestioned leader among enterprise Linux distributions, one of the most visible and core elements of the entire Open Source world. And its financial success is helped, in no small part, by the fact that it sells a value--ISV application certification against Red Hat Enterprise Linux--that doesn't have an equivalent in layered software products. Other pure Open Source plays have also been modestly successful, but we're certainly not talking Oracle or Microsoft levels of success--nor, indeed, Sybase or SAS levels. Even Red Hat pulls in well under $1 billion in annual revenues, and may also be starting to bump up against the limits of the model.
Other cases involve long-term trends that almost certainly will have an increasing impact over time. More software is moving out into the network "cloud," and--in an at least peripherally connected shift--thin clients of various stripes are beginning to move beyond their historical ghettos in call centers and other narrow use cases. However, the oft-cited Salesforce.com and many Citrix case studies aside, these shifts will be far more gradual and incremental than the enthusiasts would have us believe. Enterprises will be slow to adopt Software as a Service for anything they consider even vaguely core, and the traditional fat-client PC model, flawed as it may be in a lot of ways, is familiar, well understood, and has huge inertia.
I love examples. They help give me confidence that something has at least a patina of reality. But, in the singular, they constitute anecdotes and not data. And anecdotes don't really prove anything. In fact, they can mislead by giving the atypical more weight than it deserves.
Earlier this week, Sun Microsystems launched a family of new servers based on the SPARC64 VII processor. In contrast to Sun's "CMT" (Chip Multithreading) UltraSPARC T1 and T2 designs, which deliver aggregate performance using a large number of threads, SPARC64 takes a more conventional approach rooted in the performance of a single thread. This design is more attuned to the performance requirements of typical enterprise back-end applications and databases, whereas CMT has more of a network-facing orientation.
SPARC64 comes from Sun's partner Fujitsu, which also designs and builds the midrange and high-end servers that use the chip; these systems went by the "APL" codename while they were under development. Fujitsu and Sun jointly sell these servers--as well as the CMT "Niagara" boxes for which Sun does the processor and server development.
The new processor and servers are solid upgrades. Although not as multi-threaded as Niagara, the SPARC64 VII bumps the number of cores per chip to four, and adds the ability to run two threads on each of those cores--a technique that helps mask delays associated with waiting for data to arrive from memory. Frequency is also up from the prior generation to 2.4 GHz and 2.52 GHz.
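The intuition behind using a second thread to mask memory delays can be captured in a toy probabilistic model. The 40 percent stall fraction below is an invented illustrative number, and real hardware threads are not fully independent, but the shape of the benefit is right:

```python
# Toy probabilistic model of hardware multithreading: if one thread stalls
# on memory some fraction of the time, a second thread can often use the
# idle cycles. The 40% stall fraction is an invented illustrative number.

def core_utilization(n_threads, stall_fraction):
    """Fraction of cycles in which at least one thread can issue work,
    assuming threads stall independently of each other."""
    return 1 - stall_fraction ** n_threads

print(f"{core_utilization(1, 0.4):.0%}")  # one thread keeps the core 60% busy
print(f"{core_utilization(2, 0.4):.0%}")  # a second thread lifts that to 84%
```

The returns diminish with each added thread, which is part of why SPARC64 stops at two per core while the throughput-oriented Niagara designs go much further.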
Sun pegs the performance boost over the prior generation at up to about 80 percent for commercial applications, and up to 2x on apps that are floating point-intensive. That's a nice increment, considering that upgrades from the SPARC64 VI servers require only CPU board upgrades. While I find that vendors often overplay the issues associated with competitors' "forklift" hardware upgrades and other supposed gotchas, there's no doubt that less is more when it comes to making infrastructure changes.
Overall, there's little to fault in this announcement from a product perspective. It's a solid, nondisruptive bump to a product line that--although Sun doesn't break out numbers--must contribute a substantial chunk of its server revenue.
My critique instead relates to how Sun (again) seemed almost bored by this announcement. Yes, there was a press release--it wasn't exactly a stealth launch--but there was certainly none of the mass marketing air cover that Sun (for better or worse) is wont to darken the skies with when it comes to something that it's genuinely excited about. No blog postings from its pony-tailed Blogger-in-Chief. No glitzy roll-out.
Don't get me wrong: many of the things that get Sun's corporate blood flowing, such as open storage, OpenSolaris, Project BlackBox, ZFS and solid-state disk, and Niagara, are genuinely exciting. But many are also speculative. It would behoove Sun to at least give the old college try to showing some comparable enthusiasm for products that are proven and bringing in real revenues.
One of the reasons I attend O'Reilly's Open Source Conference (OSCON) is that, more so than others I go to, it gets into the intellectual and--dare I say--philosophical underpinnings of things as well as the things themselves.
To be sure, this sort of thing may not be especially important if we're talking about things like servers--although even these interact with long-term undercurrents, such as massively multi-core programming, that are largely removed from day-to-day concerns but immensely important in the long view. Open Source, however much it has blended into the mainstream of software, is still very much part and parcel of the history and motivations behind it.
Much of that background--the continuing areas of conflict that are part and parcel of it, the hints it offers at how Open Source may evolve, and some of the opportunities (and challenges) of bringing Open Source into domains other than code--was on display at the Participate 08 panel discussion yesterday. The complexities of the many interweaving threads are neatly captured in these whiteboards drawn by Collective Next during the panel.
But for our purposes here I'm going to focus on one specific thread. I'll be following up with further discussion of other points.
One of the panelists was John Wilbanks, who runs the Science Commons project (within Creative Commons). He had some interesting perspectives on the concerns of scientists, as opposed to programmers. For example, in the Open Source code world, as it has evolved, attribution (at least formal attribution) isn't a component of most licenses. But, in the academic community, it's all about attribution. As he described it: "the motivation is to be associated with the publication of an idea... to own a fact."
This is a potentially huge disconnect between the data/science world and the code world. This is especially so because attribution clauses were left out of most Open Source licenses for a deliberate reason. The problem is that attributions "stack"--that is, they accumulate threads of contributors that may go back years. Thus, a legal requirement to preserve a list of all that historical accretion of intellectual property would be enormously unwieldy to implement in any practical way.
Academics deal with this sort of thing all the time. However, it's handled within the context of social norms and customs, and violations are dealt with largely by corresponding social censures rather than legal ones. Attribution is serious business in academia--but it's not implemented through formal legal strictures that require literature searches for previously unknown Russian papers of 30 years past. (Of course, there are bruised egos and perceived slights all the time--welcome to the world--but these are issues mostly resolved within a community rather than in a court of law.)
As a side issue, John also noted that, in the sciences, he does not recommend limiting work to non-commercial use or prohibiting derivative (i.e. transformed) use of the work. He said that such restrictions have a very chilling effect on integration and federation. I've written previously about the downsides of some Creative Commons licenses in the context of photographs. Increasingly, strictures against commercial use--an area that Open Source code licenses have largely stayed away from, to their betterment--seem reasonable and fair on the surface but, in fact, have far more cons than pros.
As open data, creative writing and media, and code merge, we're going to increasingly need to reconcile the issues that matter most to the communities who own the copyrights to their respective bodies of work.
Intel perhaps most of all, but a lot of technology vendors are pushing the idea of MIDs (Mobile Internet Devices) and Netbooks (essentially scaled-down, low-cost notebooks). Intel's interest here is pretty straightforward: the more a mobile device resembles a traditional PC, the more Intel's x86 franchise gives it a leg up. By contrast, smartphones are built on any number of low-power processors, typically something other than the x86 architecture.
I'm skeptical that these categories between the smartphone and the notebook will amount to a whole lot.
The issue I see with MIDs and Netbooks in the general case is essentially a matter of form factor.
On the one hand, smartphones fit easily in most pockets. The downside is a small screen and text input that is largely by thumb, rather than by finger. Furthermore, because smartphones have historically been built using such a hodgepodge of hardware and software--including browsers--Website compatibility has been spotty at best, even leaving aside the (significant) issues that a smaller screen area introduces.
At the other end of the scale are familiar notebooks. Even the more portable varieties have more-or-less full-size keyboards and screens. Besides their relatively high cost and the need to maintain and update a full-fledged operating system, notebooks weigh a few pounds and fit in a backpack or briefcase--not a pocket, however oversized.
Against this backdrop, one can imagine Netbooks that sit in a kitchen to look up recipes or a MID that functions as a mobile browser and entertainment gadget somewhat in the vein of an iPod Touch. However, these scenarios feel like a stretch to me. The cellphone is ubiquitous and highly portable (and smartphone browsers will get better). The notebook is well-suited to keyboard input and rich Website display (and will inevitably get ever smaller and lighter).
What do the alternatives offer?
A MID is a form factor that is neither as portable as a smartphone nor as full-functioned as a notebook. A Netbook is a notebook that is underpowered and otherwise compromised. At a low enough price point, perhaps. But the One Laptop Per Child experience suggests that the most aggressive price points may well be too aggressive to be practical.
In short, at least in a market where almost everyone has a cellphone and notebooks are the full-function PC of choice, it's hard to see the compromises of the MID and the Netbook as anything but too much pain for too little gain.
All that said, I'm now going to do something that used to intensely annoy a former editor of mine who never let the facts interfere with a good argument. I'm going to qualify my skepticism. By analogy, people ride and pedal all manner of vehicles. Some, such as bicycles and cars, are clearly mainstream. A few are true oddballs (unicycles). Some have very specific use cases (two-seater cars). Others are generally uncommon in the US but are relatively common in other locations (scooters).
Perhaps MIDs or Netbooks will emerge as the two-seaters or even the scooters of the computer world. Truly mainstream devices? Probably not. But the uber-portable and inexpensive notebook, in particular, could find takers in the developing world or as a third or fourth household PC in more developed nations--especially as Moore's Law and other technical advances bring faster processors and bigger storage to even the most entry-level of price points.
The "Is VMware violating the GPL?" question has cropped up again (Matt Asay has his own post here), so I thought it would be useful to dust off an Illuminata Perspectives piece that I wrote when this same thing came up about a year ago. I've excerpted the most salient points from the original post and added a little updated commentary.
The basic issue is as follows. As most folks involved with servers know by now, VMware ESX Server is a server virtualization product that allows multiple "guest" operating systems to co-exist on a single physical server independently of each other. ESX provides what is known as "native" virtualization--that is, the VMware software sits directly on the physical hardware; it isn't "hosted" like an application in the manner of, for example, Microsoft's Virtual Server. We usually use the term "hypervisor" to refer colloquially to this layer of software that lets virtual machines be created on top of it.
This is somewhat of an oversimplification, however. ESX Server, similarly to the Open Source Xen Project, actually has two major pieces. One is a virtual machine manager (VMM)--the layer that actually controls the virtual machines. In VMware's case, it's called the "VMkernel" and is proprietary code. The other is a service console that lets the user control and monitor the functions of the VMM. This "Console OS" is based on the Linux 2.4 kernel; it's Open Source under the terms of the GPL.
How Things Work:
The VMkernel and the Console OS are two separate pieces of code. VMware's Zachary Amsden describes how they work (in a comment to the VentureCake blog post that kicked this discussion off last summer):
First, the vmkernel is not a Linux kernel module. The vmkernel is a completely isolated and separate piece of software which is loaded by a module called vmnix. The vmkernel has no knowledge or understanding of Linux data structures or symbols, and as a necessary result, does not depend on the Linux kernel for any services whatsoever.
Second, the vmkernel does not run inside or as part of the Linux kernel. It simply takes over control of the CPU and switches into a completely alien operating mode - one where Linux itself no longer exists. The former kernel used to boot the systems is still alive, but to switch back to it is a complex and involved process, similar to the well-defined copyright boundary of switching between two user processes, which are completely separate programs, running in their own address spaces. The vmkernel and the console OS Linux kernel are two completely separate entities, and the process of going from one to another is even a stronger separation than that given to user processes - more like rebooting the processor and re-creating the entire world.
What's described here is the standard ESX Server product. My understanding is that VMware's "embedded hypervisor," ESXi, uses a Remote Command Line Interface for management that does not rely on a Linux-based foundation.
At issue here is that Linux (vmnix is part of the Linux-based Console OS) is involved with bootstrapping the VMkernel. Does this constitute a form of linkage that would make the entire resulting "work" (i.e. the whole of ESX Server) subject to the GPL--thereby requiring that the source code be made public?
It is also true that some Linux kernel developers, such as Alan Cox, are on record as questioning the legality of loadable kernel modules with proprietary licenses. So, does this mean, as Matt suggests, that all these uses are illegal?
This is a corner case that involves a type of linkage and relationship that the GPL doesn't explicitly cover. It's also a type of linkage and relationship between software components that Linus Torvalds, the Software Freedom Law Center, the Free Software Foundation, and others are well aware of and have explicitly or implicitly decided to tolerate, even if they don't enthusiastically endorse it.
But is it legal? Well, yes, until some court decides it isn't. Which is a battle that none of the most relevant players have any interest in fighting.
Twitter extreme eg.--many of us non-users couldn't perceive benefits. Low barrier made it OK to say, "just try it..." Not true w/all things
This is a sometimes overlooked advantage of software as a service (SaaS) in its various forms. Even installing free or trial software can be challenging enough that all manner of virtual appliances and application virtualization have been suggested as possible solutions to this "pain point."
Of course, no barrier is truly zero height. Even signing up with a Web site, getting the hang of the basics, and (perhaps most of all) figuring out how or if it fits into the flow of your lifestyle and work don't just happen. This is especially true when the service in question is new and different. When it makes you approach an activity in a genuinely different way or otherwise shift an established mindset.
New is hard for developers and designers. It's also hard for users.
That said, the freedom to tell prospective users/customers to just point their browser at a URL and "play" is an incredibly powerful concept. Especially when the product in question lends itself better to experience than explication.
Kathy is right that Twitter is one such example. Before I gave it a serious run, I thought it sounded sort of silly. It was actually using it that convinced me otherwise.
Compare and contrast this to the case of TiVo and the digital video recorder (DVR).
TiVo changes how you watch television just as Twitter changes how some people communicate. Aside from some sports and news, I now rarely watch TV live. I almost never just watch "whatever's on." And I often don't even know which channel or night some program is on.
But TiVo the company has always had a great deal of difficulty explaining that transformation of TV watching. Especially early on, a lot of people viewed TiVo as essentially an enhanced VCR--when, in fact, the experience is qualitatively different. TiVo has been a tough sell to consumers because it required them to invest in a pricey piece of electronics for benefits that were hard to understand in the abstract.
DVRs in general only really started to go mainstream when they started to be bundled by the satellite and cable companies. In other words, when the acquisition barriers went down dramatically. And it's not even just about the cost, but about the mental energy and perceived risk associated with making definitive choices.
Seeing sometimes is believing. But you have to make it easy to take a look.
In October of 2000, I hopped a Las Vegas-bound flight to attend a developers' event being thrown by the InfiniBand Trade Association.
By way of background, InfiniBand was one of the hot technology properties of the pre-bubble-bursting days. It was touted as a better (faster, more efficient) way to connect servers than the ubiquitous Ethernet. Its more vocal backers, of which there were many, went so far as to position it as a "System Area Network"--a connective fabric for data centers. A whole mini-industry of silicon, software, host bus adapter, and switch vendors supported InfiniBand. One sizable cluster of these vendors resided in Austin, Texas, but there were many scattered around the U.S. and elsewhere--to say nothing of significant InfiniBand initiatives at companies such as IBM and Intel.
I don't remember all the details of that past InfiniBand event but it filled a decent-sized hall at the Mandalay Bay and was followed by a party that took over the hotel's "beach" on a balmy Vegas evening.
Last week, I attended another InfiniBand event, TechForum '08. It was also in Las Vegas. More modest digs at Harrah's reflected that InfiniBand hasn't exactly lived up to those past hopes. However, the fact that there even was a TechForum '08 also reflects that InfiniBand is still with us--primarily as a server connect for high performance computing (HPC) applications where low latency and high bandwidth are especially important.
Given that I've been following InfiniBand since its early days, this seems like a good opportunity to reflect on where InfiniBand stands today and where it may be going.
As with another Big "I" technology, Intel's Itanium processor, it's tempting to glibly dismiss InfiniBand as a failure because it failed to live up to early (probably unrealistic) hopes and promises. In fact, InfiniBand now dominates performance-sensitive connections between servers in HPC. It's largely taken the place of a plethora of competing alternatives, most notably Myricom's Myrinet and Quadrics' QsNet. Plain old Gigabit Ethernet has successfully held onto its position as the default data center interconnect and Fibre Channel has remained the default for storage area networks. But InfiniBand has actually been quite successful at establishing itself as the standard interconnect for optimized clusters.
One also finds InfiniBand technology beneath the covers in a variety of products. Among other products, a variety of blade chassis use InfiniBand in their backplanes. This may not exactly be InfiniBand the standard, but it is InfiniBand the technology. And this type of use contributes to InfiniBand component volumes--which tends to drive down prices.
But, what of 10 Gigabit Ethernet? Isn't it inevitable that 10 GbE will replace InfiniBand? Indeed, most InfiniBand component suppliers, such as Mellanox, are covering their bets by embracing both technologies.
But 10 GbE, after many years in development, remains in early days. Costs are still high. The converged 10 GbE most relevant to InfiniBand's future--sometimes called "Data Center Ethernet"--isn't even a single thing. It's at least six different standards initiatives from the IEEE and IETF (not including the related FibreChannel over Ethernet efforts). In many cases, 10 GbE will also require that data centers upgrade their cable plant to optical fiber.
In short, although 10 GbE will certainly emerge as an important component of data center infrastructures, lots of technical work (and political battles) remain.
So does Ethernet conquer all? Maybe. Someday. A lot happens someday. InfiniBand may not ever markedly expand on the sorts of roles that it plays. But 10 GbE is far from ready to take over when latency has to be lowest and bandwidth has to be highest.
A bunch of us were debating over Twitter yesterday whether it's desirable to have separate personal and professional identities on the service. The consensus seemed to be: "it depends." It depends on your professional situation. It depends on how personal and workplace-safe you want your posts. And so forth.
I find this whole question of what I call "identity 2.0" fascinating. Increasingly, there's a blurring line between personal and professional identities--and even between multiple compartments within those buckets.
As Wendell comments in a post: "It's kinda like living in a small town again." There are a lot of analogs. Just as locality and small size break down barriers between public and private in a small town or village, so, too, do the Internet and the search engine.
This is a trend that we're all going to be wrestling with for years to come. Things I wrote back in my college days are readily available online, if you know where to look, but it was mostly stuff written for newspapers or Usenet posts.
There are doubtless matters on which I've changed my thinking, but there is probably nothing that I'd find especially embarrassing. What I don't have online--because it didn't exist back then--is "off the record" commentary written purely for a circle of friends. (In Here Comes Everybody: The Power of Organizing Without Organizations, Clay Shirky describes how many blogs are clearly written for a close circle of friends, even though they can potentially be viewed by anyone.)
Wall Street may not be Main Street. Neither are Silicon Valley and its relatives (Research Triangle Park of North Carolina; Cambridge, Mass.; Austin, Texas, etc.). The general sort of "live and let live" attitude toward activity outside of the workplace that may predominate there--as well as among employees who are highly visible bloggers, pundits, and so forth--isn't really the norm.
Suggestions that we do something about the "ephemerality of the Web" would also, to a certain degree, exacerbate any issues. Old Web sites, comment threads, discussion boards, and so forth do tend to evaporate over time, providing a loose statute of limitations. The better we get at preserving the Web for the sake of history, the less likely that youthful indiscretions will vanish into the mists of time.
Of course, much of the Web's most vacuous inanity--think comments on Digg--is cloaked in effective anonymity. (By "effective," I mean that it can often be pierced by legal action, but is anonymous from the perspective of ordinary searches.) Transient anonymity has its own problems. However, a blogging pseudonym--perhaps known to friends--is doubtless a reasonable response in many circumstances.
I, as well as others, have written previously about related data portability issues.
Assessing the open-source scorecard is complicated. A complete "state of open source" would fill many pages. But here are a few things that have struck me over the past year or two.
Large swaths of open source have become mainstream--to the point of invisibility. Jay Lyman summed this up well in the context of the last LinuxWorld. We've also seen large vendors, such as Hewlett-Packard and IBM, generally de-emphasizing Linux and open source as businesses in their own right.
Just to be clear, invisible is absolutely not the same thing as irrelevant. However, some open-source fans who feel the need to ally themselves with a highly visible movement taking on "the enemy" find this shift troubling. (See, for example, "Mike's" comment to the aforementioned blog post.)
Pure-play open source as a standalone business has largely proven to be marginal. There are many successful companies that leverage open source in various ways. But it's the cross-selling of other things--systems, proprietary software, and services, in the case of system vendors, or advertising, in the case of Google--that brings in most of the revenue.
Basic pay-for-support models tend to have low conversion rates and mostly haven't been big moneymakers. (Essentially a form of "FREE 3," to use Chris Anderson's terminology.) I discussed this point earlier in "Does Open Source Have More Value Within a Larger Vendor?"
The Linux desktop remains a niche. There was a time when the desktop looked to be the next great frontier for Linux. That hasn't happened. Ironically, Apple Macs, which are arguably even less open than Windows PCs, have been the big desktop winners over the past few years--not Linux.
The record for open source more broadly on the desktop is mixed. The Firefox Web browser has been the standout success. But other projects, such as OpenOffice.org, have been better at pressuring proprietary software vendors on various fronts (standards, pricing) than at emerging as big winners in their own right. And, today, the action has moved far more toward mobile clients (where Linux is starting to have some degree of uptake) and software running "in the network" than the traditional "fat desktop" client operating system.
Which brings us to the next point. There's a tension between cloud computing and open source. I cover that tension in much more detail in "The Cloud vs. Open Source," but essentially, most of the open-source licenses that were written to require that modifications and enhancements to open-source software be contributed back to the commons don't apply when software is distributed only in the form of network services, rather than directly in the form of the software bits themselves.
More broadly, the very idea of the cloud can be seen as conflicting with the Free Software Foundation's "Software Freedom" principles, to which open source was a means and not an end.
Yet for all those points that are either in the debit column or that some would place there, it's hard for me to see how open source could be considered as anything other than a great success. As a model for how software is developed and how people collaborate, open source has utterly transformed IT.
Even when open source hasn't displaced proprietary alternatives, it's helped make things like open beta testing and trial versions commonplace--ubiquitous, even. When was the last time you, as a consumer, bought a software program without giving it a spin first? For me, it's been a long time. Yet buy-before-try used to be the norm.
That open source has fully inserted itself into the mainstream as a result strikes me as a feature, not a bug.
Most of the time, changes in the technology landscape happen gradually. Sometimes we can look back and pick out some inflection point--though, in my experience, such are more about storytelling convenience than anything more concrete. However, at least as often, things just evolve until one day we've clearly arrived in a different place.
Such is the case with open source.
It's gone from being an outsider movement to an integral component of the computer industry mainstream. However, more specifically, it's clearly entered a phase in which pragmatism, rather than idealism, is the reigning ethos.
Matthew Aslett touches on several aspects of this shift in his post: "Open source is not a business model." (His alternative title: "freedom of speech won't feed my children.") His conclusions (from a recent 451 Group report) include the following:
In short, as Matthew puts it: "Open source is a software development and/or distribution model that is enabled by a licensing tactic." That's a far cry from open source as the social movement or belief system that predominated early on and still has its adherents today. That's not to say that open-source proponents ever fit neatly into a single mold; Linus Torvalds, the creator of Linux, was always more the pragmatist than the Free Software Foundation's Richard Stallman, for example. However, in the main, we've clearly shifted to a place where even those who are predisposed to "Software Freedom" as a concept are more willing than in the past to treat open source as just one mechanism among several to develop and distribute software.
In my view, there are a variety of reasons for this change including the following:
Open source has, in a sense, won. By which I mean that it's entered the mainstream and has, to no small degree, heavily influenced how companies do development, engage with user and developer communities, and provide access to their products. Furthermore, the well-established success of many open-source projects (Linux, Apache, Samba... the list is long) makes many of the long-ago barbs thrown at open source (insecure, risky, unsupported, etc.) risible in today's world. Open-source advocates no longer need to jumpstart a software revolution. They can afford to be pragmatic.
And open source has "won" because it's proven to be a good model for development and collaboration in many cases. A lot of the fervor around open-source licensing debates was effectively predicated on a belief that open source had to be protected from those who would strip mine it for commercial ends and kill it in the process. However, today there are plenty of examples of open-source projects that use BSD-ish, anything goes licenses--yet are hugely successful. There remain a variety of implications to using different license types, but we're once again talking more about practical matters than philosophical ones. Few major software companies (including Microsoft) don't intersect with open source to at least some degree.
Business models have had time to play out. At the same time, it's also proven to be the case that building a sustainable and scalable business around a pure open-source play tends not to work. Many open-source companies have gone down the sell-support-for-the-open-source-bits path. The problem is that not enough customers buy up to the pay version. Thus, companies whose product is built around an open-source project have increasingly moved towards offering proprietary plug-in modules, hosted services, and things of that nature. (MySQL, now part of Sun, being just one example.)
Finally, two words: cloud computing--a term I use to refer generally to running software in the network, rather than locally. Cloud computing is shaping up to be a huge consumer of open-source software. The ease of licensing, the ability to customize, the ability to try things out quickly, and--yes--costs that tend to be lower than proprietary software, all make open source and the cloud a good fit. And cloud computing, beginning with its early consumer-oriented Web 2.0 guises, is where a lot of computing is headed over the coming years.
Richard Stallman, among other open-source purists, has decried this shift because he sees it as a move back to proprietary, centralized computing. There are some legitimate concerns about data portability, privacy, and other user rights in a cloud context. However, to narrowly and uncompromisingly focus on open source's historical roots and structure in a cloud-based world is to both tilt at windmills and re-fight a different war with the weapons and tactics of the last one. Pragmatism isn't necessarily compromise; it's adapting to the world as it is, not as you wish it would be.
For as long as I've been following alternatives to traditional "fat client" desktops, most vendors have been touting thin client and related technologies mostly in the context of better return on investment (ROI).
They'll admit that up-front costs are higher. They'll even reluctantly concede that the user experience (in the sense of response time, adding a unique application, and so forth) may not be as good as for a traditional PC. But, the pitch goes, management costs will be so reduced that you'll make back your money.
As for the users? Well, so long as the thin client pitch has been mostly about gear for call centers and the like, it's hardly surprising that IT buyers often haven't put much of a premium on richness of experience for that class of user. It's about the basic function.
The resulting business that this approach has driven has been respectable enough--especially for Citrix--but it's been fundamentally niche-y. Something for specific uses and users, rather than something broader.
One reason is that, to be frank, a lot of buyers don't believe ROI claims. The size of an up-front check you write is something tangible. Purported savings over the next three years? Not so much. Especially given that the savings are often "soft costs" that posit things like lower management costs or higher user productivity. Vendors may not be able to justify literally anything with the right ROI study. But they can try.
Moreover, justifying thin client computing strictly on a cost basis depends on these sorts of soft cost savings. After all, in a typical thin client architecture, you still need a desktop device (with a hardware bill of materials that isn't really all that much slimmer than that of a regular PC) plus you need all the back-end servers and software to deliver applications.
As a result, suppliers of complete (hardware/software/services) thin client solutions have started emphasizing two other benefits of thin client computing: compliance/security and user experience benefits.
The compliance and security aspect is pretty obvious. If data and applications aren't stored locally on a user's PC, they can't "walk" out the door. And, in general, it's pretty commonsensical that centralized applications and desktops would be easier to control whether we're talking software licensing or enforcing data retention policies.
In fact, the only thing that surprises me is that vendors didn't more widely focus on this aspect of thin client computing before now. To be sure, there's a broader awareness of data security issues, more compliance regulations, and more remote contract workers today. But ClearCube, an early "Blade PC" company, built its business largely on demand from three-letter government agencies and others for whom security was a front-and-center requirement. So the antecedents were there to see.
It also shouldn't be a surprise that the historical "cheaper but not as good as a PC" storyline around thin client computing never had a whole lot of grassroots support. However, today we have faster networks (both wide area and local area), which helps at the infrastructure level. Perhaps more fundamentally, we're starting to see a variety of application and desktop virtualization approaches.
The specifics differ considerably but these new (and "reimagined") forms of virtualization collectively focus on delivering applications and operating systems to a user PC in a controlled way. For IT, this means a thin client-like degree of centralized management. But users still have a conventional desktop--or even notebook--so they retain the PC experience. And that experience can be even better to the degree that their operating system and applications can be easily refreshed ("de-crappified" to use the technical term).
Think of it as a sort of hybrid client model. In fact, this model has even broader implications in that it means that IT can selectively control and wall off parts of a PC without necessarily taking control of the whole thing.
This, in turn, means that many of the past justifications for PCs as locked-down corporate assets no longer apply. But that's a topic for another post.
In most enterprises, PCs are what the accountants call a "corporate asset." The company buys them, loads software on them, sticks on a little asset tag, and lets employees use them as tools for their jobs. A given IT department may have more or fewer formal processes--or may simply be more or less control-freakish. But, whether employees get much choice in choosing a preferred PC model and whatever IT's general attitude toward running "unapproved" applications, the PC is company property.
There are lots of historical reasons for this general approach. Desktop PCs sat in an office or a cubicle and were clearly part of a company's physical plant. At least as importantly, management (and security) of those PCs was largely predicated on controlling the entire PC software image. Not that managing this way was particularly effective or easy, but there were few tools to deal with applications at a more granular level. Finally, even after the PC became a standard fixture of the corporate desk, that didn't mean they were in every home--nor that everyone was exactly "computer literate."
It would, of course, be silly to say all that history is now part of some dead past. However, we're starting to see a variety of intersecting changes that make it much more thinkable that IT shops could at least partially divest themselves of their PC supplier role. Instead, the idea is that employees would just use their own personal systems. There might be stipends; there might be negotiated bulk purchases that people would have the option of hooking into. IT would still be on the hook for at least corporate application support. But, whatever the details, it would be a very different way of thinking about PCs.
(Think of the following discussion specifically in the context of what are often called mobile professionals: executives, sales reps, developers, and so forth.)
Pricing, notebooks, and ubiquity. Pricing has come down, many PC buyers are going mobile, and "everyone" (in the group I'm talking about here) has a PC and is pretty comfortable with using it. That's a broad-brush statement, of course. But the bottom line is that PCs, and even notebooks, are something that people routinely own and use independently of how they use them for business. There are a lot of analogies to mobile phones here; early car phones and even their pocket-able successors tended to be company supplied. Today--especially if we're talking about basic voice phones--employees mostly just use their own.
Changing nature of work and workforce. More mobility, more contract or project-based workers, more blending of the personal and the professional. Collectively, these mean that it's increasingly impractical for IT to give someone a notebook with the stern admonition that "this shall only be used for official company business." It's not surprising that one IT manager I've spoken with who is pushing this personal PC theme particularly hard does indeed work for a company which brings in lots of consultants, local experts, and so forth on a project-by-project basis.
Changing security models. Increasingly work doesn't happen from a single location or a particular device. Furthermore, in part for the reasons noted above, a largely binary approach to security that distinguishes between those inside the "moat" and those outside doesn't really work any longer. And once you open the door to partners and others accessing your infrastructure using equipment that isn't locked-down by your IT department, you pretty much have to move to a model that does security on the basis of user roles and permissions rather than depending on specific device characteristics.
Virtualization, Rich Internet Applications, and other application delivery mechanisms. Finally, and by no means least, today there's an increasingly rich set of tools that provide ways of keeping corporate applications or even complete operating system images isolated within a personal PC. VMware ACE is an example of a product that is specifically designed for this purpose. However, because many software services are now delivered through a Web browser, in many cases users don't even need to directly connect to a corporate network in the usual way.
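To make the user-based security model concrete, here's a minimal sketch in Python. The role names, resources, and hard-coded permission table are all hypothetical; in a real deployment the mapping would come from a directory service or identity provider. The point is what the check does not look at: the device.

```python
# Hypothetical role-to-permission table; a real system would pull this
# from a directory service or identity provider, not a hard-coded dict.
ROLE_PERMISSIONS = {
    "employee":   {"email", "intranet"},
    "contractor": {"project_wiki"},
    "admin":      {"email", "intranet", "project_wiki", "provisioning"},
}

def authorize(user_role, resource):
    # Access is decided by who the authenticated user is (their role),
    # not by whether the request comes from an IT-managed, locked-down device.
    return resource in ROLE_PERMISSIONS.get(user_role, set())

assert authorize("contractor", "project_wiki")
assert not authorize("contractor", "provisioning")
```

Once access decisions hinge on roles like these, the equipment a consultant or partner brings along largely stops mattering to the security model.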
My point here isn't that every PC in an organization will get personal. In fact, as I've written about previously, I'm seeing a lot of interest in thin clients (which are inherently part of a formal IT infrastructure) for security and other reasons. But we are starting to see a shift toward more personal PCs as notebooks become ubiquitous, application access becomes less hardwired, and security shifts from device-based to user-based.
And that's likely where Canonical, the commercial entity behind Ubuntu, will earn its profits.
But its initial efforts on the client side arguably are what really helped shift the limelight to Ubuntu in the first place. Ubuntu gained the reputation of being easier to install and use than other Linux distributions--factors that have kept even many open-source enthusiasts from adopting Linux on their desktops or notebooks. And user experience remains a significant focus area.
Mark Shuttleworth, who heads and financially backs Canonical, is on record with comments such as "I think the great task in front of us in the next two years is to lift the experience of the Linux desktop from something stable and usable and not pretty, to something that's art." Or more broadly, to surpass Apple, in terms of desktop experience.
I strongly suspect that there are inherent trade-offs between the flexibility and choice associated with open source, and the unified approach (epitomized by Apple) that tends to be associated with good user interface design. But the bigger issue with mainstreaming the Linux PC has nothing to do with design and everything to do with where we are in technology history when it comes to accessing and interacting with software.
Writers of heavyweight client applications (think Adobe Systems' Photoshop, for example) don't want to support additional operating systems. Getting the latest versions of applications for its platform is a challenge even for Apple--resurgent sales and market share notwithstanding.
While there's lots of open-source software for Linux clients, there's a very modest amount of closed-source software available. This is not especially a knock on Linux, per se--though low software costs certainly contribute to Linux's attraction in some cases--but rather reflects the decades-long winnowing of the number of platforms that software vendors are willing to support.
There's also a general maturation of the PC operating system. Linux desktop distributions, Mac OS X, and--dare I say it--Windows are far more alike than they are different. You may choose one over the other to make an ideological or stylistic statement, to gain access to specific applications, or just as a matter of personal preference. But both differences and advances are increasingly at the margins.
I think we see some of this in the relatively slow take-up of Vista. The Microsoft haters blame Vista; the blame sits at least equally on the reality that Windows XP is a good enough desktop operating system for most purposes.
In short, I just don't see a lot of enthusiasm for another desktop operating system in the Windows or Mac OS X mold. This is especially so because it represents the past in many ways. Many new applications are running in the network, and the client--in its myriad forms, from desktop to smartphone--is merely a portal to access them.
In a sense, this is an opportunity for Linux. In a world where all you need is a browser and some other standardized client components, why not Linux? And, indeed, I expect that we'll see Linux on a lot of thinner clients, where it will act more as the underpinning for a browser than as a more generalized operating environment.
But I think that it is important to distinguish this from Linux, the desktop OS--as that term is normally used. This isn't about running games or editing movies on the latest quad-core Intel processor. This is about powering lighter-weight clients in which the operating system--and, especially, the general application support enjoyed by any given operating system--just doesn't matter very much.
Earlier this year, I expressed my skepticism that Mobile Internet Devices (MIDs) and Netbooks (essentially scaled-down, low-cost notebooks) would come to pass as mainstream product categories. My reasoning boiled down to an assertion that these things were neither fish nor fowl. As usually envisioned, a MID is a form factor that is neither as portable as a smartphone nor as full-functioned as a notebook. A Netbook is a notebook that is underpowered and otherwise compromised.
I've seen nothing over the past few months to change my mind about MIDs. If anything, Apple's continued march with the iPhone and the work going on around Google Android have me more convinced than ever that the browser-equipped smartphone is the future of truly mobile computing. (There are a lot of interesting dynamics here related to carrier hardware subsidies and the desire of carriers to lock down and restrict use in various ways, but those are topics for another day.)
Netbook sales, on the other hand, have been strong. In fact, they're driving a lot of the worldwide growth in PC sales. So, are we, in fact, seeing the emergence of a new product category--something that doesn't happen very often?
We are seeing a lot of consumer interest in very portable computers that are economy-priced. Economy pricing is really what's new here. Historically, companies have paid big premiums to get the most portable notebooks for their road warriors with the goal being to give up as little function as possible in service of light weight (and, to a lesser degree, small size).
Some things about Netbooks do indeed look like a new category of product. The first is that a lot of the people purchasing these devices are individuals, not businesses. In many cases (especially in the U.S.), they're intended to supplement--rather than replace--another desktop PC or a higher-end notebook. A second thing is that, especially at the entry level, Netbooks tend to have differences of kind, and not just degree. They run Linux and Windows XP, not Vista. They're explicitly intended to access Web-based applications through a browser or to run some basic productivity software locally; they're not general purpose. And they use less power-hungry, but less powerful, processors such as Intel's Atom.
However, I wonder if the apparent bright line distinction from other notebooks isn't a temporary phenomenon that will soften over time. Memory gets denser, processors get faster, LCDs get cheaper. Some of these Moore's Law-fueled advances could indeed continue to push the entry level of the notebook market down in price. Perhaps we'll even have a $100 laptop that only costs $100 some day. But I strongly suspect that a lot of that technical advance will also go into beefing up the capabilities of notebooks in the sort of price band that a lot of consumer electronics sell for--say, sub-$500.
Ultimately, I'm less convinced that we're seeing the emergence of a truly distinct product category than that we're seeing the continued downward movement of not only notebook entry pricing, but entry bulk as well. Besides, however fond IT industry people are of chopping markets into named categories, as a fellow analyst said at a recent meeting: "the average consumer calls everything a laptop anyway."
At some point during the flight over the Pacific from Tokyo, I seriously questioned my decision to take a detour rather than heading straight to Boston and home. It wasn't that I had no interest in attending the Supercomputing show, SC08, being held in Austin last week. It's just that I was coming off of what was already a two-week trip to Japan. However, Supercomputing has been getting more and more buzz in recent years--and I hadn't been able to attend previously because of conflicts--so duty beckoned.
I was glad I made it. It was an immensely interesting and educational (albeit exhausting) couple of days. What follows are a few things that caught my eyes and ears. I plan to follow up on at least some of these in more depth when I have a chance.
Energy and attendees. First of all it's worth noting the general ambience of the show. It was hopping. Economic slump you say? One wouldn't know it from walking the exhibit floor or attending the sessions. To be sure, both booth and attendance commitments are often made well in advance. Nonetheless, I find it striking that SC08 set an attendance record--over 10,000 people--and that a lot of the exhibitors I spoke with were not only happy about the level of traffic to their booths and meetings, but were, in many cases, actually closing business. I found the general feel of the show to be at least somewhat reminiscent of a long-ago UniForum--albeit with more of an academic and application flavor.
InfiniBand is very much alive. I wrote after the October TechForum '08 event that "InfiniBand may not ever markedly expand on the sorts of roles that it plays. But 10 Gigabit Ethernet is far from ready to take over when latency has to be lowest and bandwidth has to be highest." The biggest of those roles is high-performance computing (HPC) and, indeed, InfiniBand was omnipresent at SC08. No particular surprise there but certainly lots of confirmation that InfiniBand is anything but dead. Also significant was QLogic's announcement at the show of an InfiniBand switch family. What's notable is that these switches use QLogic's own chips, rather than sourcing them from Mellanox as everyone else does. That QLogic made this design investment must count as a considerable vote of confidence in InfiniBand's future.
Clusters continue their advance. Supercomputers used to be largely bespoke hardware designs specifically constructed for HPC tasks. There's still some of that. IBM's Blue Gene is one example. A start-up, SiCortex, exhibiting at the show provides another. However, in the main, supercomputing continues to be more and more about clustering together many--mostly standard off-the-shelf--rackmount or blade servers rather than creating monolithic specialized systems. This isn't a new trend, but it continues apace (and is certainly one of the reasons that InfiniBand has been regaining visibility of late).
Microsoft makes modest gains. Microsoft made it into the top 10 of the (publicly acknowledged) largest supercomputers with the Dawning 500A at the Shanghai Supercomputer Center. There was still far more Linux--and, to a lesser degree, other flavors of Unix--at the show than Windows. But this example and others help to reinforce the notion that Microsoft products are technically capable of playing in HPC. That's not to say that Microsoft will easily insert itself into environments that are predisposed to and have in-house skills aligned with Unix tools and techniques. However, as HPC becomes increasingly common in commercial environments, where Windows typically already has a footprint, Microsoft has an opportunity.
Parallel programming is still a challenge. So much so that all-around computing guru David Patterson devoted his plenary session to the topic. That said, based on Patterson's session as well as the work of a variety of companies such as RapidMind and Pervasive Software, we may be starting to see at least the outlines of how developing for processors with many cores and for amalgams of many systems might progress. The issue is that parallel programming is hard and most people can't do it. One approach is training but we seem to be developing a consensus that neither this nor new programming tools (e.g., languages) really get to the heart of the matter. Rather, the general direction seems to be toward something you might call multicore virtualization--the abstraction of parallel complexities by carefully crafted algorithms and runtimes that handle most of the heavy lifting. (MapReduce is a good example of the sort of thing I'm talking about.)
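MapReduce is worth a quick illustration because it shows the division of labor that "multicore virtualization" implies: the programmer writes two small, sequential functions, and a runtime handles distributing the work. This toy Python word count is a sketch of the pattern, not any particular MapReduce implementation.

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def map_fn(document):
    # The programmer writes only simple per-document logic: emit (word, 1) pairs.
    return [(word, 1) for word in document.split()]

def reduce_fn(word, counts):
    # ...and simple per-key logic: combine all counts for one word.
    return word, sum(counts)

def mapreduce(documents, mapper, reducer, workers=2):
    # The "runtime" side: fan the map calls out across workers,
    # group the intermediate pairs by key, then apply the reducer.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        mapped = list(pool.map(mapper, documents))
    groups = defaultdict(list)
    for pairs in mapped:
        for key, value in pairs:
            groups[key].append(value)
    return dict(reducer(k, v) for k, v in groups.items())

docs = ["the cat sat", "the dog sat", "the cat ran"]
counts = mapreduce(docs, map_fn, reduce_fn)
print(counts["the"])  # 3
```

The parallelism (and, in real systems, the partitioning, fault tolerance, and data movement) lives entirely inside `mapreduce`; neither user-supplied function knows or cares how many workers exist.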
Supercomputing and HPC used to be their own world. Increasingly they illuminate the future direction of all (or at least most) computing--including the challenges ahead. That's a big reason that I find Supercomputing such a fascinating show.
Not everyone considers this a good thing. But it's the reality of a development model and licensing approach that's gone mainstream, depends in no small part on corporate patronage, and is now widely viewed as simply an efficient approach to developing many types of software. What's struck me recently, however, is not just a cooling of some of the passion around open source as a social movement or alternative to commercial software. Rather, it's what feels like a general and widespread acceptance that business models built around pure play open source simply don't work for the most part. It's one more sign of how pragmatism is trumping ideology throughout open source.
Stuart Cohen, who used to head the Open Source Development Labs (one of the predecessors to the Linux Foundation), had this to say in BusinessWeek:
For anyone who hasn't been paying attention to the software industry lately, I have some bad news. The open-source business model is broken.
Companies have long hoped to make money from this freely available software by charging customers for support and add-on features. Some have succeeded. Many others have failed or will falter, and their ranks may swell as the economy worsens. This will require many to adopt a new mindset, viewing open source more as a means than an end in itself.
In a recent podcast, Matt Asay (GM of Americas for Alfresco) and Dave Rosenberg (co-founder of MuleSource) express a similar point of view. What's striking to me isn't just that folks with strong open-source credentials are making such statements but that, for the most part, people don't find these conclusions especially radical or contestable.
When I say "pure play open source," I'm referring to business models in which a company's products are open-source software and only open-source software. (From the context, I take it that Cohen means something similar when he refers to "the open-source business model.") Such a business model depends on selling support for open source bits given that there aren't hardware or proprietary software sales to subsidize open-source development as a sort of loss leader or complement to the stuff that really brings in the money. (IBM has perhaps most notably leveraged investments in Linux and other open-source projects in this way.) There are various tweaks on this basic approach but they all boil down to driving adoption and building community with open source and then monetizing through support contracts when the software goes into production.
It's hardly a wrong-headed idea. Plenty of companies use open-source software for many tasks and some do indeed want a single point of support for that software--especially if it's being used as part of particularly critical systems.
However, successful models aren't just about reasonable notions, or even reasonable notions that play out in practice to some degree. Rather, they're about numbers. It's not enough to sell something. You have to sell enough of it at a high enough price to turn a profit. And this is where pure play open-source approaches have mostly fallen down. Some companies buy support, but not enough do.
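To see why the numbers matter, here's a back-of-envelope calculation with deliberately invented figures: even a popular project converts only a small fraction of its free users into paying support customers.

```python
def annual_support_revenue(free_users, conversion_rate, contract_price):
    # Revenue from the slice of free users who buy a paid support contract.
    return free_users * conversion_rate * contract_price

# Hypothetical pure-play vendor: 500,000 free users, 1% of whom buy
# a $1,000-per-year support contract. All of these numbers are made up.
revenue = annual_support_revenue(500_000, 0.01, 1_000)
print(revenue)  # 5000000.0: real money, but it must fund all development
```

The catch is that every variable cuts against the vendor: the software's quality pushes the conversion rate down (it mostly just works), and competitors can support the same bits, which pushes the contract price down.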
The noncommercial-use clause in Creative Commons licensing is certainly a problematic restriction, as things stand. Unfortunately, Creative Commons appears to be going down the path of merely defining it more crisply when, in my view, the better approach would be simply to eliminate it entirely.
First, a little background. Creative Commons licenses are a sort of counterpart to open-source software licenses that is intended to apply to things like books, videos, photographs, and so forth. There are a variety of Creative Commons licenses worldwide (e.g. these are the choices offered on Flickr), but for our purposes here, one important distinction is between the licenses that allow commercial use and those that do not.
A noncommercial license means: "You let others copy, distribute, display, and perform your work--and derivative works based upon it--but for noncommercial purposes only."
The problem Creative Commons is trying to solve is that noncommercial turns out not to be easily defined. I've discussed this issue before, but essentially, we operate in a world where opportunities to "microcommercialize" through Google AdSense and self-published books abound. So drawing a line--especially one that the content creator and the content user can agree on without too much thought--is hard.
See this comment from an earlier post, for example. ("Commercial" is a particularly confusing term with respect to photography, where it refers to uses that aren't primarily editorial or artistic and involves requirements for model releases and the like--a meaning only incidentally related to commercial use as Creative Commons employs the term.)
It's not hard to see how we came to have such a noncommercial-use clause. There's a certain visceral appeal to saying, "I'll share my creative works with the world, and anyone can use them for free, so long as they credit me and don't make money off them. If they do make money, I want my cut or have the right to prohibit use."
As I say, appealing. Also not very workable or useful. A lot of truly personal and noncommercial uses are already either likely covered under Fair Use or are trivial. (Does it really matter which license covers the photo you downloaded to use as desktop wallpaper?) And prudent companies will ensure that all rights are in order by contacting the content owner directly, no matter what the license says.
I find it notable that no major open-source software license contains restrictions about who may use the software. Different licenses have more or fewer requirements about the circumstances under which you must contribute code enhancements back to the community or on actions you can't take (for example, related to patents) if you wish to retain your license. But they don't differentiate between whether you're a Fortune 500 corporation, a school, or just an individual playing around for fun.
If open-source licenses did routinely have clauses governing who could and couldn't use software, I think that it's fair to say that open-source would have had a much smaller impact on the world than it has.
Creative Commons licensing, by contrast, offers up a complicated set of options that seem calculated to encourage people to contribute works to the commons while not pushing their envelope to allow any uses that they might consider "unfair" in some way. The result is a system that is far too complicated and that doesn't offer any real benefit beyond a simple license requiring (1) attribution and (2) that downstream derivatives maintain the same license.
Complexity, ambiguity, and lack of awareness are the issues with Creative Commons. Tweaking the signage associated with the overly complicated smorgasbord of options doesn't address any of those things.
I'll leave speculation about the back story to others. Rather, I'd like to poke a bit further at what this says about the trade show business. ZDNet's Sam Diaz writes:
I hadn't really thought too much about it, but it only makes sense that the Internet's next victim would be the trade show. Think about the outreach tools that companies have at their disposal these days.
Webcasts have become online events where people from around the globe can attend without booking a flight, hotel room, or restaurant reservations. Viral videos are being produced by companies to showcase their products and technologies in real-world environments. Brand names are creating loyal followings via "fan memberships" on social-networking sites such as Facebook. And, increasingly, there are smaller, intimate shows that cater to crowds with specific interests--conferences dealing with social networking, cloud computing, open source, and more.
Those shows reach the audiences they want to reach, and the bank doesn't have to be broken to participate. But what a devastating blow to local economies.
I don't disagree with any of this. Webcasts, viral marketing, and so forth do indeed offer additional, and much lower-cost, ways of reaching out to customers, partners, and developers. And, in Apple's specific case, it doesn't especially strain credulity to at least accept that Macworld is no longer as good a marketing fit as it once was. However, if one takes the broader perspective, I'm not at all sure that this says all that much about the trade show business in general.
That's because the trade show business has always been a bit of a racket. A former boss regularly complained about the money he wasted on trade shows in which he had to participate. And that was more than 10 years ago.
Companies often effectively have to exhibit because it's expected. (Hmm. ACME isn't at the show this year; it must be in trouble.) Participation might also be seen as a cost of doing business with an important partner. (Want Oracle to work with you? Better exhibit at OracleWorld.) There isn't necessarily a quantifiable return on the investment.
Inertia and general politics are other factors. Lots of groups both inside and outside of companies have a strong vested interest in keeping the trade show gravy train going. And that includes, as much as anything, attendees, for whom shows can be as much about getting out of the office for a week as about genuine business value.
That's not to say that the real-life interaction that happens at these events has no value. Anything but. For me, one of the greatest values of shows is that they offer a convenient focal point for lots of face-to-face discussions, both formal and less so.
In fact, I have this pleasant fantasy that the IT industry could replace its most lumbering shows with get-togethers in nice locales. No need for all the big exhibits at the expensive, antiseptic convention centers. Throw in some unconferencing. (One example somewhat along these lines is Sun Microsystems' CommunityOne. It will be interesting to see how CommunityOne East fares, given that it marks the first time one of these events has been run independently of JavaOne.)
But the reality is that there's a natural tendency toward structure in such things. I'm sure that we'll all have plenty more opportunities to partake of bad convention center food.
I don't like the term "private cloud." My reason is straightforward. The big-picture concept underpinning cloud computing is that the economic efficiencies associated with megascale service providers will be compelling. And, conversely, because they lack the scale of big providers, local computer operations will operate at a significant cost penalty.
To use the electric-utility analogy popularized by Nick Carr and others, efficient power generation takes place at a centralized power plant, not at an individual factory or office building.
There's ongoing debate about just how important these scale effects are and what form, exactly, they take. However, if one accepts this fundamental premise of cloud computing, then the future of computing lies predominantly in multitenant shared facilities of massive size. (Size here refers not necessarily to a single physical facility but to a shared resource pool that may, and probably will, be geographically distributed.)
In other words, a "private cloud" lacks the economic model that makes cloud computing such an intriguing concept in the first place. Put another way, the whole utility metaphor breaks down.
This is not to say that all computing will take place off-premises through these large service providers. In fact, there are lots of reasons why a great deal of computing will continue to happen locally.
For example, Chuck Hollis, global marketing chief technology officer at EMC, writes in The Emergence Of Private Clouds:
IT organizations and service providers that use the same standards will eventually be able to dynamically share workloads, much the way that's done in networks, power grids, and distribution today.
Fully virtualizing traditional enterprise IT internal resources creates substantial advantages--that much is becoming clear.
And if you're an outsourcer or other IT infrastructure service provider, the advantages of virtualizing your capabilities to do multitenancy better is probably clear as well.
And, in a post of his own, James Urquhart of Cisco Systems (and a fellow CNET Blog Network blogger) argues that:
Disruptive online technologies have almost always had an enterprise analog. The Internet itself had the intranet: the use of HTTP and TCP/IP protocols to deliver linked content to an audience through a browser. The result was a disruptive technology similar to its public counterpart but limited in scope to each individual enterprise.
Cloud computing itself may primarily represent the value derived from purchasing shared resources over the Internet, but again, there is an enterprise analog: the acquisition of shared resources within the confines of an enterprise network. This is a vast improvement over the highly siloed approach IT has taken with commodity server architectures to date.
The result is that much of the same disruptive economics and opportunity that exists in the "public cloud" can be derived at a much smaller scale (scope) from within an enterprise's firewall. It is the same technology, the same economic model, and the same targeted benefits, but focused only on what can be squeezed out of on-premises equipment.
I do have a couple of quibbles, but I mostly agree with the overall sentiment of these posts.
Applications and services will continue to run both inside enterprise firewalls and in the cloud for reasons of technology, switching costs, and control.
On the technical front, many of today's applications were written with a tightly coupled system architecture in mind (for example, a high-performance fibre channel disk connected to large SMP servers) and can't simply be moved to a more loosely coupled cloud environment.
For existing ("legacy") applications, there's also the switching cost and time to move to a new software model. In fact, one of the big arguments for standardized, outsourced IT--allowing companies to focus on their competitive differentiators--can also argue against making investments to change functional software systems (and their associated business processes), especially if the financial benefits are long-term and somewhat amorphous.
Security and compliance are also major concerns today. We can argue about the degree to which they're justified. But ultimately, perception is reality.
And there is a certain convergence between how many applications run in the cloud and how they run in the enterprise. Web standards and virtualization are major drivers here, and they certainly make a degree of interoperability and mobility between enterprise and service provider (over time) entirely thinkable.
Existing applications (and operational procedures associated with them) change slowly, and many of them will continue to run inside corporate firewalls as a result. We'll also start to see "federated" and "hybrid" architectures that bridge the enterprise data center and the shared-services provider. Cloud computing will evolve in concert with enterprise applications, not in isolation from them.
But we shouldn't lose track of the fact that cloud computing is posited to be a disruptive change to the computing landscape. If that is the case, then the "cloud" moniker shouldn't be slapped onto evolutionary changes to the way we run applications.
Micropayments are once again being broached as a way to pay for online content--not because micropayments have a proven track record, but because a number of recent pieces have taken up the topic.
I'm with the skeptics here even if I don't fully agree with all the individual arguments. I'm certainly not opposed to the basic idea. In fact, I think it mostly unfortunate that we don't have more viable means to directly monetize the creation of valued content. But the evidence suggests that micropayments don't work in the main.
There are a lot of separate threads to this discussion. I'm going to try to tease some of those apart here.
We see the iTunes Music Store (ITMS) presented as an example of a working micropayments scheme. I don't buy it. For one thing, the amounts involved ($1 give or take) are more in the realm of what I call "midi-payments." I think of micropayments more in the vein of a penny, a nickel, or a dime--what I might pay, at least in principle, to read a magazine article or a newspaper column. For another, songs are different from short written pieces. We've generally heard them before and we're buying them because we want to listen to them again--many times.
The transaction costs problem. One issue is that an approach requiring people to make lots of very small purchases also carries a lot of transaction costs. Clay Shirky, among others, has long raised this objection against micropayments. Some of these transaction costs can be mitigated with technology (the actual cost in gear, bandwidth, and so forth to process the transaction, and the ease with which the payment can be made). However, the buyer still needs to make a lot of individual purchase decisions. It's well recognized that there's a huge gap between free and cheap (even very cheap) when it comes to buying things.
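The per-transaction fee portion of those costs is easy to quantify. Using made-up but card-network-like numbers (a 30-cent fixed fee plus 2.9 percent), the processor's cut is trivial on a normal sale and ruinous on a micropayment:

```python
def processor_take(price, fixed_fee=0.30, percent_fee=0.029):
    # Fraction of the sale consumed by a hypothetical payment processor
    # charging a fixed fee plus a percentage per transaction.
    return (fixed_fee + price * percent_fee) / price

print(f"{processor_take(20.00):.0%}")  # 4% of a $20 sale
print(f"{processor_take(0.10):.0%}")   # 303% of a 10-cent sale
```

And that only covers the mechanical cost of processing; Shirky's fuller argument adds the mental transaction cost on the buyer's side, which no fee schedule can engineer away.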
But subscriptions mostly don't work either. Based on my own behavior, I'm sure there's something to the transaction cost argument, but it's unclear to me that alternatives to "nickel-and-diming" (to use Shirky's phrase) work in a broad way either. Subscription-based music services like Rhapsody haven't been especially popular. (Although I use and like it.) And the examples of at least reasonably successful online subscription content, such as the Wall Street Journal, strike me as exceptions that bring very specific value to a population that's willing to pay.
Micropayments are a worse fit with today's Web environment than during their first boomlet in the dot-com era. Part of this is simply that people have gotten used to free news and other content on the Web. There are also more sources of news than ever--albeit much of it duplicative and often relying on major news organizations for source material. However, more broadly, linking and search have become such fundamental drivers of traffic that anything behind a pay-wall (as subscription-only content inevitably is) will take a huge traffic hit. This makes such content less relevant; it also hurts ad revenue.
I agree with those arguing that micropayments are again raising their head not because changes in technology or consumer behavior now make them viable. Rather, the alternatives have largely failed as well. Yes, online advertising is ubiquitous and it does bring in revenue, but the level of that revenue often seems more attuned to blogs cranking out quick-hit posts than to traditional news organizations with investigative reporters, editors, and so forth.
I don't claim to know which model or models will click for newspapers and their successors, what that industry will evolve into, and what we all may collectively give up because we're not willing to pay for certain things. But chalk up micropayments as an idea that seems really appealing but just doesn't work in the general case.
By now, most people involved with IT are familiar with at least the broad outlines of cloud computing--the idea that applications run somewhere out in the network. We just get back data streams or Web pages; the actual crunching, connecting, and correlating happens somewhere else.
Plenty of people, including myself, have taken cuts at defining cloud computing with a bit more rigor. I've come to believe that particular exercise can still be useful for thinking about different use cases and different market segments, but I don't expect we'll ever see a canonical definition. Too many people have too many different perspectives--and particular interests in having some aspects, say "private clouds," be viewed in a particular way.
However, specifics of the cloud-computing taxonomy aside, it's worth noting that the vision of cloud computing, as originally broached by its popularizers, wasn't just about more loosely coupled applications being delivered over networks in more standardized and interoperable ways--a sort of next-generation service-oriented architecture, if you would. Rather, that vision was about a fundamental change to the economics of computing.
As recounted by, among others, Nick Carr in his The Big Switch, cloud computing metaphorically mirrors the evolution of power generation and distribution. Industrial-revolution factories--such as those that once occupied many of the riverside brick buildings I overlook from my Nashua, N.H., office--built largely customized systems to run looms and other automated tools, powered by water and other sources.
These power generation and distribution systems were a competitive differentiator; the more power you had, the more machines you could run, and the more you could produce for sale. Today, by contrast, power (in the form of electricity) is just a commodity for most companies--something that they pull off the grid and pay for based on how much they use.
Some companies may indeed generate power in a small way--typically as backup in outages or as part of a co-generation setup--but you'll find little argument that mainstream power requirements are best met by the electric utility. The Big Switch argues that computing is on a similar trajectory.
And that posits cloud computing as a much more fundamentally disruptive economic model than a mostly gradual shift toward software being delivered as a service and IT being incrementally outsourced to larger IT organizations. It posits having the five "computers" (which is to say complexes of computers) in the world that Sun CTO Greg Papadopoulos hyperbolically referred to--or at least far, far fewer organizations doing computing than today.
Such an IT landscape would look very different--profoundly affecting, just for a start, any vendor competing in it. And that's without even discussing all the regulatory, privacy, and control of information issues that would assume great prominence.
It's an intriguing and big argument, and one well told. I've also come to think it's mostly wrong--at least for any timescale we care about as a practical matter.
I'm emphatically not arguing against cloud computing in the small-"c" sense. Computing is getting more network-centric. Check. Less tied to the physical infrastructure it was initially installed on. Check. More dynamic. Check. More modular. Check. And so forth. Check. Check. Check.
In fact, I even expect that we will see a pretty large-scale shift among small and medium businesses away from running their own e-mail systems and other applications. We've already seen this among consumers: Google search, Google applications, and Web 2.0 sites are all aspects of cloud computing.
And there are economically interesting aspects to this change. No longer do you need to roll in (and finance) pallets of computers to jump-start a company; you go to the Web site for Amazon Web Services. One implication is lower barriers to entry for many types of businesses.
But that's not the sort of near-term economic shift that the electric grid brought about. Rather, it made both unnecessary and obsolete the homegrown systems of the enterprises of the day. And it did so relatively quickly.
And that is what I don't see happening any time soon, on a substantial scale with cloud computing. So far, there is scant evidence that, once you reach the size of industrialized data center operations (call it a couple of data centers to take care of redundancy), the operational economics associated with an order of magnitude greater scale are compelling.
A while back, I wrote that building support for server virtualization directly into the hardware, something called an embedded hypervisor, hasn't taken off to any significant degree.
Rather, most IT shops continue to purchase virtualization as a third-party add-on (typically from VMware or Citrix), or they acquire it as part of a Linux distribution or Microsoft Windows.
Many of the management and other services associated with virtualization are going to be added on, in any case. However, the thinking of a lot of people went, wouldn't it make sense to at least get the foundation in place as part of the server purchase, given that we're seeing more and more interoperability between the various hypervisors and the software that exploits them?
Since writing that piece, I've received a variety of interesting comments, and had some discussions with IT vendors and others I thought worth sharing.
Reader rcadona2k commented:
Adopting a hypervisor is an active choice or, in most cases, a surrender of your hardware. Embedded hypervisors aren't just a BIOS; they require formatting your storage a particular way (VMware VMFS, Hyper-V NTFS, LVM/raw LUNs for Xen). The virtual BIOS features amongst hypervisors for the guests are not standardized, and the virtualized guest devices are not standardized. When you pick a Type-1 hypervisor, you lock yourself into another "platform."
Some good points here. We have a bad habit in the IT industry of using the word "commodity" when we really mean things along the lines of "widely used with variants available from multiple sources" (and, therefore, relatively low-priced). Hypervisors are an example of this. They all do roughly the same thing. There are a variety of suppliers. And the price for base-level hypervisors has been sliding toward zero.
But they're not commodities. For all the interoperability work that has been taking place at the management and services layer, there remain significant product differences that affect things as substantial as an IT shop's storage architecture. Some of these will go away--or at least be abstracted away--over time, but not all necessarily will.
Given that the choice of hypervisor still matters in such important ways, it's understandable that people continue to buy them primarily as an explicit component of the broader virtualization software ecosystem that depends on them.
Another feedback theme was just that we're still in the early days of virtualization. Perhaps most notably, when VMware rearchitected ESX Server to create the embedded ESXi version, not all the capabilities and features carried over. (Without going into all the details, the full ESX uses a Linux-based service console to manage the hypervisor; ESXi does away with this and is much thinner as a result. However, the current iteration of ESXi doesn't fully replicate all the capabilities provided by that console.)
However, the VMware partners that I've spoken with fully expect that upcoming ESXi versions will soon reach parity with the older ESX architecture and that this will therefore cease to be a reason to shy away from the embedded approach.
I remain skeptical that embedded, just-built-in hypervisors are going to become the norm that it once seemed they would be. If nothing else, Microsoft's Hyper-V--most likely predominantly installed as part of Windows--will tend to hold sway in Microsoft-centric environments, of which there are many.
At the same time, it's too early to write off the idea of embedding hypervisors just because the idea hasn't gained a lot of initial momentum.
Here's the basic question: where does the hypervisor--the software layer that underpins server virtualization--live and who owns it? Is it just part of the server or is it just part of the operating system?
For now, to be sure, it's often something that IT shops purchase from a third-party--we're mostly talking from VMware here. However, pretty much everyone expects that over time this foundational component will be increasingly built-in--even if the higher-level value-add management and virtualization services that make use of it are explicitly purchased from a variety of sources.
Virtualization vendors have often considered this an important question.
A few years back, I had written a piece about how Novell and Red Hat were adding the Xen hypervisor to their Linux distributions. And that Microsoft had made clear its intention to add virtualization to Windows--technology now known as Hyper-V. In short, virtualization was starting to move into the operating systems of a number of vendors.
Well, that notion didn't sit well with Diane Greene--then CEO of VMware--as she made clear to me by coming over and grabbing me by the lapels (only somewhat figuratively) at an Intel Developer Forum event. From Diane's admittedly biased perspective, the hypervisor should be independent of any single operating system. I hadn't said otherwise. But I apparently didn't make the opposing case enthusiastically enough.
At the time, VMware ESX Server (its native hypervisor) had to be installed as with any other third-party software product. However, over time, VMware and other virtualization vendors came out with versions of their products that could be installed from a USB memory stick or other form of flash memory. It was called ESXi in VMware's case.
Thus the embedded, or at least embeddable, hypervisor was born with rumors throughout 2007 becoming product announcements in September of that year.
There's actually a lot to be said for the embedded hypervisor. Lots of IT environments--especially enterprise ones--do indeed have a mix of operating systems and operating system versions. Given that, there is indeed a lot to be said for the idea that hypervisors just come with the server, as a sort of superset of the firmware (like the BIOS) already loaded on every system. Then IT administrators could just configure any guest OSs they want on top.
It's logical. But it's not really playing out that way--at least so far.
After all the initial excitement in late 2007, embedded hypervisors didn't really go anywhere in 2008. Instead, Microsoft's Hyper-V rolled out and KVM found its way into the main Linux kernel as an alternative style of Linux virtualization backed by Red Hat.
Whether or not it makes "sense," in some theoretical, architectural sense, it's no longer clear to me that embedded hypervisors are going to be the path that the industry predominantly follows.
Rather, at the moment, homogeneous environments are tending towards whatever is built into the OS. And enterprises are going to their ISV of choice--sometimes Citrix for XenServer--but far more often VMware for ESX.
At the very least, it now looks as if--for the foreseeable future--IT shops will acquire virtualization, including hypervisors, in a variety of ways that vary as a function of their individual requirements, circumstances, and vendor alignments.
One of the questions related to client computing that I've been exploring of late is whether we're likely to see a mainstream mobile device or devices emerge between a smartphone and an ultra-portable notebook.
My Illuminata colleague Jonathan Eunice and I debated this subject on a video recently--mostly in the context of long battery life, instant on/off mini-notebooks of various sorts. The HP Jornada 820 of the late 1990s is one possible prototype for such a device, suitably updated for a wirelessly connected world. A stillborn product of more recent vintage offered another take.
I'm perhaps more skeptical than my colleague that we'll see the right intersection of technologies, costs, and use cases to support a mainstream mobile--but not pocketable--computer that's not a full notebook but has other attributes that make it compellingly better for people on the go.
(This is the point where someone jumps up and yells "NETBOOKS!" To which my response is that Netbooks are not really a category. Leaving aside, for the nonce, the peak of their most faddish popularity, Netbooks are really just cheap notebooks. Low price is their distinguishing feature, not battery life or anything else that makes them particularly suited to throwing in a backpack. Even their weight is little different from the best of the ultraportable notebooks.)
Of course, in a sense, we have lots of tweeners today. We have digital cameras, portable gaming consoles such as the Nintendo DS, and e-ink based e-book readers like Amazon's Kindle. But these are all optimized for very specific purposes; they're in no sense general purpose computers or even subsets of computers optimized for mobility.
However, a recent post by ZDNet's Jason Perlow "Forget Kindle DX. How about the ZuneBook?" got me thinking. Might some form of tablet one day be a tweener of choice?
Let me be crystal clear about one point. I'm not talking about tablet PCs as we know them today. They have their adherents but most people find that it's hard to use them for many of the things that PCs are good for (like writing using a keyboard) while simultaneously carrying over notebook baggage such as weight, relatively short battery life, longish boot times, and so forth.
Rather I'm thinking of something that is physically thin, light, easy to read in sunlight, instant on/off, multitouch screen, wirelessly connected using both Wi-Fi and cellular networks, and about the size of an 8.5-inch by 11-inch pad of paper. I imagine a software environment that isn't necessarily general purpose but could be extended to at least some degree. Google Android or Windows Mobile might be possibilities. Think of it as an e-book reader on steroids.
Such a device isn't possible today even if you leave out the question of what it would cost if it could theoretically be built. The display is the real killer. A color, e-paper, multitouch display is a few years out. OLEDs will improve on existing LCDs on several dimensions--notably, in this context, battery life and thickness. However, OLED technology still doesn't get you to the same easy-on-the-eyes-even-in-sunlight point and all-day-plus battery life as e-paper.
But it seems an interesting direction for device makers to explore. Once the foundation technologies are available, it's something that could deliver qualitatively different experiences than either a pocketable smartphone or a notebook with a keyboard. And that's the sort of compelling differentiation that a tweener device will need to make it big.
Intel has slipped out a revised schedule for its next-generation Itanium processor, code-named Tukwila. Again. This time it's into 2010.
Intel released a statement Thursday on the schedule changes. It reads in part:
During final system-level testing, we identified an opportunity to further enhance application scalability best optimized for high-end systems. This will result in a change to the Tukwila shipping schedule to Q1 2010.
In addition to better meeting the needs of our current Itanium customers, we believe this change will allow Tukwila systems a greater opportunity to gain share versus proprietary RISC solutions including Sparc and IBM Power. Tukwila is tracking to 2x performance vs its predecessor chip. This change is about delivering even further application scalability for mission critical workloads.
That may be true. However, the fact remains that this is yet another delay to the program. This will put Tukwila's introduction more than two years after the debut of the current "Montvale" generation--which itself was a delayed and modest speedbump to "Montecito"--and one that Intel barely announced publicly.
Tukwila has had an especially bumpy history. This generation of Itanium processor began life as a chip project code-named Tanglewood and was said to be envisioned as a radical multicore design by the ex-Digital Equipment Alpha engineers who worked on it.
First, Intel changed the code-name to Tukwila after the Tanglewood Music Festival complained. This was back in 2003--to give you an idea of how long this particular project has been weaving its way through development. At that time, it was slated for something in the neighborhood of a 2007 release.
Then the chip apparently went through a variety of significant design changes. It will still be the first Itanium to sport Intel's serial processor communications link (QuickPath Interconnect--QPI) and integrated memory controllers. Those are both major enhancements, but otherwise Tukwila is a more conventional quad-core evolution of current Itanium designs. It will also be manufactured with a 65-nanometer process instead of the denser 45-nanometer process already used by the newest Intel Xeon CPUs. Along the way, the chip's schedule has been publicly pushed back a number of times, now to early 2010.
As a practical matter, delays to Itanium matter less to Intel and the server makers that use it (meaning Hewlett-Packard first and foremost) than in the case of x86 Xeon, where a delay of a few months can have a major revenue impact vis-a-vis the competition.
Buyers of high-end servers like HP's Superdome and NonStop value vendor relationships, reliability, and a wide range of enterprise-class capabilities far more than they do the last drop of performance. HP has done a good job of things like leveraging its c-Class BladeSystem infrastructure for its Itanium-based Integrity servers and putting together systematic go-to-market programs with partners such as SAP.
Nonetheless, at some point, ongoing delays have to hurt competitiveness--especially given how IBM's Power systems have been hitting on all cylinders the past few years.
We've all heard the rant. With e-books, there's no paper, printing, transportation, and so forth. So why should an e-book still cost $9.99 (typical for Kindle) or even more?
The idea of e-books being cheaper makes a lot of intuitive sense. If everything you physically hold in your hand and everything it took to deliver that physical good to your hand can be converted to a few megabytes worth of electrons, surely the cost of the book must be dramatically lower than a typical hardcover--and the price should reflect that fact.
The problem is that the costs aren't nearly as much lower as you might believe. Here's one breakdown from Money magazine for a hardcover bestseller by way of Scott Laming of BookFinder.com Journal:
Based on a list price of $27.95
$3.55 - Pre-production - This amount covers editors, graphic designers, and the like
$2.83 - Printing - Ink, glue, paper, etc
$2.00 - Marketing - Book tour, NYT Book Review ad, printing and shipping galleys to journalists
$2.80 - Wholesaler - The take of the middlemen who handle distribution for publishers
$4.19 - Author Royalties - A bestseller like (John) Grisham will net about 15% in royalties, lesser known authors get less. Also the author will be paying a slice of this pie piece to his agent, publicist, etc.
This leaves $12.58, which Money magazine calls the profit margin for the retailer. However, when was the last time you saw a bestselling novel sold at its cover price?
One way to look at this is to look at the percentage of the list price that printing represents. That's 10 percent--plus at least a chunk of the wholesaler line item. So let's call it 20 percent in all.
But, as noted, given that books generally sell at a discount off list, I find it more intuitive to look at this the other way. Start at zero and add cost and profit line items. In the example, the typical volume retailer is often making far less than the $12.58 figure would suggest. A 40 percent discount brings it down to only about $1.40; hardcover bestsellers are a sort of loss-leader for retailers.
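To make that arithmetic concrete, here's a minimal calculation using the Money magazine line items quoted above (the exact retailer take obviously varies with the actual discount offered):

```python
# Per-copy economics of a $27.95 hardcover bestseller,
# using the Money magazine breakdown quoted above.
costs = {
    "pre_production": 3.55,  # editors, graphic designers, etc.
    "printing": 2.83,        # ink, glue, paper
    "marketing": 2.00,       # book tour, ads, galleys
    "wholesaler": 2.80,      # distribution middlemen
    "royalties": 4.19,       # author's cut (~15% for a bestseller)
}
list_price = 27.95

retailer_cost = sum(costs.values())            # what the retailer effectively pays
margin_at_list = list_price - retailer_cost    # Money's "retailer profit": $12.58
sale_price = list_price * (1 - 0.40)           # typical 40% bestseller discount
margin_after_discount = sale_price - retailer_cost

print(f"retailer cost: ${retailer_cost:.2f}")
print(f"margin at list price: ${margin_at_list:.2f}")
print(f"margin after 40% discount: ${margin_after_discount:.2f}")
```

At a 40 percent discount, the retailer's take shrinks from $12.58 on paper to roughly $1.40 in practice.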
Pre-production. Other things being the same, there's no reason this goes down with an e-book. Arguably it's a bit lower if something is sold solely as an e-book--perhaps a bit less design work and proofing related specifically to the physical nature of a book--but it's actually likely a bit higher if we're talking about having both physical and digital versions--as would be the typical case today.
Now some, such as blogger Aaronchua, argue that this just shows that traditional publishers "have not changed their operating structure to leverage on the new economics brought on by the Web."
However, as noted in discussion of the prior piece, these functions are not just costly overhead. "After the book's in the publishing house, it is usually reviewed by like up to 5 editors who give their opinion before it's handed over to one editor who they believe is the best for it. You then get an editor, who through multiple revisions helps the author get the book to a better standard and quite often to more closely resemble the author's original idea."
Now, perhaps the whole process is too heavyweight. But how many of us have read a book and thought to ourselves that "it really needed an editor"? Most of us, I'd say. You can skimp here, but the results often show it.
Marketing. Again, there's no inherent reason why the dollar amount changes. Many aspects of the marketing process probably change if we posit an all-digital world. But social media and other forms of viral promotion are not a panacea that magically replaces book tours, professional publicity work, and so forth. Sure, you don't need to do any of this but you don't need to sell many books either.
Profits. Let's be generous and cap the costs there. In practice, there are going to be some costs related to digital delivery that someone is going to have to shoulder along the line, but ignore that.
If we're going to sell the book for $9.99 net of any discounts, that leaves us with $4.44 to split between the retailer and the author. Compare that to $4.19 for the author in the printed book example and something between about $1 and $10 for the retailer. So a $9.99 e-book in this example leaves less money after costs than the hardcover does.
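The e-book side of the comparison is equally quick to check, keeping pre-production and marketing at the hardcover figures, as the argument above assumes:

```python
# Per-copy economics of a $9.99 e-book, holding pre-production and
# marketing at the hardcover levels quoted above (the post's assumption).
ebook_price = 9.99
pre_production = 3.55   # assumed unchanged for a digital edition
marketing = 2.00        # assumed unchanged for a digital edition

# What remains to split between the retailer and the author's royalties.
left_over = ebook_price - pre_production - marketing
print(f"left for retailer + author: ${left_over:.2f}")
```

That $4.44 must cover both the author's royalty and the retailer's margin, versus $4.19 in royalties alone for the print edition.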
This may be made up by higher volumes to some degree. However, as Tim O'Reilly noted in a 2007 post:
I think that the idea that there's sufficient unmet demand to justify radical price cuts is totally wrongheaded. Unlike music, which is quickly consumed (a song takes 3 to 4 minutes to listen to, and price elasticity does have an impact on whether you try a new song or listen to an old one again), many types of books require a substantial time commitment, and having more books available more cheaply doesn't mean any more books read. Regular readers already often have huge piles of unread books, as we end up buying more than we have time for. Time, not price, is the limiting factor.
The economics of selling back-catalogs may also be different. Pre-production costs are, almost by definition, fixed. They're incurred before the first book can go out the door. Marketing is also primarily a fixed advance cost. (Although the size of budgets will be tuned to expected sales--unknown authors with no track record shouldn't expect massive advertising and publicity campaigns.)
So once those costs have been incurred--and hopefully recouped--in more or less the usual way through the first couple of years of a book's life, it may make sense to offer a discounted digital edition given that it doesn't incur the cost overhead associated with lower volume "long tail" sales.
(In principle, you could argue that the same logic applies to the pricing of digital editions at any time in a book's lifecycle. However, in practice, a $5 e-book of a new release would cannibalize the more profitable print edition.)
I know this post went into a lot of detail, but when you're talking about business models and pricing, it is important to actually run the numbers. One can dispute fundamental assumptions behind those numbers of course, but at least they give a starting point.
In this case, they show that--if you want the same level of professional preparation and promotion associated with a typical printed book--the $9.99 e-book price that a lot of people grumble about is probably pretty near the floor.
There's a bit of an anti-Netbooks meme making the rounds in blogs and on Twitter and the expected push-back from their fans. From where I sit, this is fueled partially by the conflating of product and product category, partially by competitive sniping, and partially by genuine consumer confusion. Let me try to tease those threads apart.
I've been skeptical from pretty much the beginning that there was a bright line distinction between Netbooks and other inexpensive, small form-factor notebooks. And it's this lack of a truly standalone category that analyst Michael Gartenberg is writing about in his provocatively titled "Netbooks R.I.P."
"What's in a name?" Shakespeare asked, adding "a rose by any other name would smell as sweet." While some perceive the netbook as a new product category -- a class of device that's never existed -- I would have to beg to differ. A netbook is merely a laptop with the pivotal axis based on price first and foremost... Sure, my price-oriented definition might sound heretical to those who view the netbook as an ode to cloud computing, ubiquitous usage scenarios, and freedom from Microsoft OS tyranny, but that's not how the market has shaped out.
The current generation of Netbooks tends to have certain defining characteristics--specifically Intel Atom processors and Windows XP (or Linux). But, as Gartenberg notes, a 7-inch screen also used to be a defining characteristic. Now many Netbooks come with 10-inch screens. Come Windows 7 and future processor generations from Intel (and AMD), I expect any clear distinctions that exist today to rapidly blur.
That's not to say that analysts and product managers won't create a bucket for small, price-focused notebooks. They may call that bucket "Netbooks." They may call it "Value Ultraportables." They may call it "Fred."
IT industry people like to chop markets into named categories for reasons of their own, even if as a fellow analyst said at a recent meeting: "the average consumer calls everything a laptop anyway."
One reason that the nomenclature fight around Netbooks is more intense than such battles tend to be is that the distinction between Netbooks and other ultra-portable notebooks is also a fault line in a competitive battle between Intel and AMD.
For Intel, Netbooks have been the big product category win for its Atom processor. (If a somewhat serendipitous win. Atom was originally more focused on a new class of "Mobile Internet Devices" (MID), a product category that so far hasn't taken off.) For its part, AMD has focused on an incrementally higher price and processing power point with its Athlon Neo platform (found in the HP dv2).
As a result, it's in Intel's interests to promote Netbooks as something new that is both apart from and incremental to the notebooks that use higher-end (and higher dollar) Intel parts. At the same time, it's in AMD's interest to denigrate Netbooks as underpowered and not real PCs.
Finally, there is a continuing trickle of evidence suggesting that consumer satisfaction with Netbooks isn't all that great.
Like James Robertson, I found this latest report a bit curious. Many of the people I know with Netbooks are almost excessively fond of them. However, it's fair comment that most of the people I know are also geeks, are attracted to the new and different, and understand what a Netbook class of device can do--and what it can't. It doesn't stretch credulity to imagine less educated consumers taking a $300 notebook home and then being dissatisfied because it's not a general replacement for a $1,000 notebook.
Highly portable notebooks without the road warrior premiums historically associated with portability are a great advance for consumers. But I'm also excited about the devices that new screen technologies and widespread wireless connectivity could enable. The possibilities in this space are great. Netbooks are just a flavor of notebook.
I had planned to leave the Google Chrome OS discussion to others. It's not that I don't have strong opinions about it but with a commentariat noise level approaching the Michael Jackson ruckus of Tuesday, I figured I'd try to wrap up a client project instead. I did so and I've been getting questions all day, so I thought it would be useful to put down my thoughts in a systematic way rather than answering every query ad hoc.
Let me start out by making what is probably a controversial statement. I don't see this as a big deal. Microsoft is not now radioactive. The Force has not been disrupted. The computer industry does not look different than it did yesterday.
Look. Just about everyone has been assuming that Google was going to bring the Google Android operating system that it developed primarily for smartphones to low-end notebooks. While Chrome OS is different from Android, it's conceptually pretty much the same thing--an open-source operating system built atop a Linux kernel.
So now Google has pre-announced that it's going to do basically what everyone figured it was going to do. Sorry, but that doesn't make me want to run through the streets shouting and hollering.
This is, in many respects, just another Linux distribution. And Linux has (speaking charitably) not had the impact on the general-purpose PC market that its supporters once hoped it would. Sure, enthusiasts load Linux onto PCs and it can work quite well, but even at an open-source developer conference you'll often see far more Macs than PCs running Linux. I can't say that I understand why Chrome OS would succeed where Ubuntu has, if not failed, largely played to a niche.
It's Google we're talking about here to be sure. To which I say that Google has had plenty of failures: Orkut, Google Video, Knol, and Google Base anyone?
Fundamentally, I'm skeptical that anyone is in a position to seriously displace Microsoft and Apple from effective ownership of the general-purpose desktop and notebook space. There's so much ecosystem, most of all software ecosystem, in place that a new entrant would have to offer just overwhelming advantage. Which Linux didn't and doesn't.
There's a story here but it's not about displacing Microsoft.
Rather, I see Chrome OS as reflecting a change in the client and the way we access applications. To the degree that Chrome OS further illuminates and, by doing so, accelerates such change it may indeed be important in its own right. However, this is largely a change that's happening with or without Google--and certainly with or without anything Google does with respect to client operating systems.
And it's this macro-trend that's the real threat to Microsoft, not Chrome OS. Microsoft's franchise is built in no small part on having become the de facto standard API for programs running on another de facto standard that we colloquially call the PC. That franchise may be hard to crack (although Apple has had a degree of success) but that franchise doesn't necessarily carry over to new areas where far less software is locally installed and therefore a "standard API" becomes much less important.
The Linux desktop (whether Chrome OS, Ubuntu, or whatever) matters far less than this shift in how and where computing gets done. That's the far bigger threat to Microsoft: not that it won't be able to defend its existing franchise but that it will be cut off from extending that franchise into computing that happens over the Net rather than locally.
We've been hearing a lot about thinner client devices of late. Netbooks are a hot topic. I've wondered if there might not be a role for a sort of modern thin client as well. And Google's Chrome OS, pitched for a browser-centric world, had the digerati all in a flutter a few weeks back.
A lot of this activity reflects a general move away from software that is locally installed and run on a traditional PC to software and services housed on servers out on the network--in the cloud, to use the lingo du jour. It's enabled in no small part by increasingly pervasive networks including wireless ones of various kinds.
However, although cloud computing tracks improvements in networks, it doesn't necessarily sync up so cleanly with the parallel improvements going on in computers themselves. As a commenter put it in a recent post of mine: "The thing that I don't understand about the move to 'cloud-based services' is that it seems at odds with Moore's Law. Specifically, devices are going to have more & more processing power, disk space & memory - why would you want to offload processing to the cloud?"
This is a deceptively deep comment and one that touches a lot of basic architectural questions about how we will run software and where we will run it.
One thought is that we're not really running counter to Moore's Law. Rather, we're moving the increased number of transistors that Moore's Law gives us from the client to the server. We're making clients thinner (and therefore more portable, cooler, and so forth) and the servers fatter.
There's some truth in that with mobile phones perhaps offering the clearest illustration.
But, for more notebook-like clients, there's a lot of processor and graphics horsepower on the local computer that's going to waste much of the time. And, in any case, telecommunications infrastructure places hard limits on bandwidth at a given time and place, whereas we can dial our local compute horsepower up and down by selecting devices with different characteristics. So it often makes more sense to favor local processing.
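The trade-off lends itself to a back-of-envelope calculation: offloading wins when the server's compute advantage outweighs the cost of shipping the data over a constrained link. Here's a minimal Python sketch with purely illustrative numbers:

```python
# Back-of-envelope sketch: when does offloading a task to the cloud beat
# running it locally? All numbers below are illustrative assumptions.

def local_time(work_gflop, local_gflops):
    """Seconds to run the task on the client CPU."""
    return work_gflop / local_gflops

def offload_time(data_mb, bandwidth_mbps, work_gflop, server_gflops):
    """Seconds to ship the input data over the network plus server compute."""
    transfer = (data_mb * 8) / bandwidth_mbps   # megabits over megabits/sec
    return transfer + work_gflop / server_gflops

# A compute-heavy task with little data favors the (faster) server...
heavy = {"data_mb": 1, "work_gflop": 500}
# ...while a data-heavy, light task favors staying local.
light = {"data_mb": 100, "work_gflop": 5}

for name, task in [("compute-heavy", heavy), ("data-heavy", light)]:
    t_local = local_time(task["work_gflop"], local_gflops=10)
    t_cloud = offload_time(task["data_mb"], bandwidth_mbps=10,
                           work_gflop=task["work_gflop"], server_gflops=100)
    print(f"{name}: local {t_local:.1f}s vs offload {t_cloud:.1f}s")
```

With these assumed numbers, the compute-heavy task finishes roughly an order of magnitude faster offloaded, while the data-heavy one is far faster locally because the bandwidth term dominates.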
In fact, the fundamental thing that thinner clients and cloud computing tackle isn't really the movement of computing off the client but rather the movement of "state" off the client--which is to say data, applications, and customizations specific to a given user.
As a practical matter, most clients still store some amount of state. In the days of old, terminals didn't store anything locally. Sun's Sun Ray line comes closest to replicating this experience in modern thin clients. However, even browsers store cookies and can be configured with extensions and plug-ins that will vary from one installation to the next.
And, for most purposes, this is probably a reasonable enough state of affairs. Our personal devices are personal anyway; we just want to get away from having to load and manage custom software for each individual task that we want to do. Shared, public clients are a different matter, of course. However, in this case, a lowest-common-denominator software load (such as a browser) is typically sufficient.
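What "moving state off the client" means can be sketched concretely: the network holds the canonical copy of a user's data and customizations, and any client, personal or shared, just pulls a cache of it. A minimal Python sketch (all class and method names are hypothetical):

```python
# Minimal sketch of state living in the network rather than on the client.
# ServerStore stands in for a cloud service; all names are hypothetical.

class ServerStore:
    """Canonical copy of each user's state, keyed by user id."""
    def __init__(self):
        self._state = {}

    def save(self, user, state):
        self._state[user] = dict(state)

    def load(self, user):
        return dict(self._state.get(user, {}))

class ThinClient:
    """A client that keeps only a local cache of cookies/customizations."""
    def __init__(self, server):
        self.server = server
        self.cache = {}

    def sign_in(self, user):
        # Any device becomes "yours" by pulling your state from the network.
        self.user = user
        self.cache = self.server.load(user)

    def set_pref(self, key, value):
        self.cache[key] = value
        self.server.save(self.user, self.cache)  # write-through to the cloud

server = ServerStore()
laptop = ThinClient(server)
laptop.sign_in("alice")
laptop.set_pref("theme", "dark")

kiosk = ThinClient(server)   # a shared, public client
kiosk.sign_in("alice")       # the same state appears on a different device
print(kiosk.cache)           # {'theme': 'dark'}
```

The lowest-common-denominator client in this model needs nothing user-specific installed; it only needs the ability to fetch and cache state.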
There is clearly a lot of work left to do, and battles, both technical and political, left to fight, to arrive at the best architectural models and programming practices for this new generation of client-server computing. For example, do "rich Internet applications" live in the browser, or is a separate framework such as Adobe's AIR a better approach? Where do .NET and Java fit in?
These (and many others) are not small questions. Application writers need to understand at a very granular level the environment for which they're writing. And there is very much a tension between richness of the client experience and the degree to which we can standardize and simplify that client.
I started following and writing about topics like Amazon Web Services and mashups even before they were corralled under the "cloud computing" moniker. But today, cloud computing is one of the hottest topics in IT.
Much of what I write about the cloud drills down on particular aspects or is a reaction to some vendor's announcement. Here I'm going to take a different approach and take a broader look at where things stand today and some of the challenges ahead.
1. Let's get one thing out of the way first. Cloud computing is real. Yes, there's a lot of hype and a lot of "cloud-washing" (applying the cloud term to only peripherally-related things). But cloud computing legitimately refers to a convergence of technologies and trends that are starting to make IT infrastructures and applications more dynamic, more modular, and more network-centric.
2. The industry has reached a rough consensus on a basic taxonomy for public clouds. We have infrastructure as a service (e.g. Amazon Web Services), platform as a service (Microsoft's Azure), and software as a service (Salesforce.com). People may quibble about some of the details and about how to characterize standalone Web services and such but IaaS, PaaS, and SaaS have developed into a convenient shorthand for describing the basic levels of abstraction for network-based computing.
3. Private clouds exist and will continue to exist. I'm not a huge fan of the term, but many enterprises simultaneously want to take advantage of the technologies and approaches associated with public clouds while continuing to operate their own IT infrastructure (or, at least, to maintain dedicated hardware at a third-party provider). Some of this is doubtless "server hugging" and some is giving IT-as-usual a trendy new name. However, there are lots of reasons why enterprises can't just move to a multi-tenant public cloud provider and it's not even clear that it makes economic sense for many to do so.
4. Security and compliance are high on the list of those reasons. I often see such concerns essentially trivialized as a matter of attaining a comfort level or a level of knowledge--sort of an enterprise version of consumer worries about the safety of online banking. However, we're now getting into very real and very thorny questions, such as how right-to-audit clauses can be satisfied in a cloud computing environment.
5. Closely related are legal matters. I hear a lot of generalized concern that the requirements for law enforcement to obtain data from a service provider may well be, at least in practice, lower than those needed to obtain a warrant for a company's own servers. Furthermore, we've already seen a case where the FBI confiscated servers from a hosting provider above and beyond those related to the specific company under investigation. Borders, especially national ones, also carry--not always well understood--legal implications.
6. Nick Carr's The Big Switch argued that computing is on a similar trajectory to what we saw with electrical power generation and distribution. If so, that would make cloud computing a fundamentally disruptive economic model rather than a mostly gradual shift toward software being delivered as a service and IT being incrementally outsourced to larger IT organizations. However, so far, there is scant evidence that, once you reach the size of industrialized data center operations (call it a couple of data centers to take care of redundancy), the operational economics associated with an order of magnitude greater scale are compelling. Specialization, such as to meet industry-specific compliance and regulatory requirements, will also tend to mitigate cloud computing concentration.
7. Data portability is a must. Interoperability less so. Although data portability isn't a panacea--even if you can extract your information in a documented format that doesn't mean you can transparently make use of it somewhere else--it's a base-level requirement. Interoperability is trickier. We're seeing some standardization activity at the IaaS level through a combination of de facto standards, consortia, and third-party brokers that translate among services. However, as we move further up the software stack, there's far less standardization to build on.
8. Cloud computing and virtualization intersect in interesting ways, but they're not the same thing. The flexibility and mobility provided by server virtualization is a great match for cloud platforms in general. And certain types of cloud computing largely define themselves in terms of the virtual machine containers that virtualization creates. However, companies such as Google have demonstrated that large-scale distributed infrastructures don't require server virtualization; they architect their infrastructures using other techniques and provide higher-level abstractions and services to users.
9. Location-based applications will reach their potential through cloud computing. People have been talking about the potential of apps that understand place almost since cell phones went mainstream. However, it's the intersection of more precise sensors on the client (GPS augmenting cell signal triangulation) and easily-consumable cloud-based applications that can mash up that data with geographical databases and the data from other users of a service that are moving apps about "place" into the mainstream.
10. The cloud will change the client. There often seems to be an implicit assumption that, over time, computing moves into the cloud and mobile devices become interchangeable display and input devices. Copies of our devices' "state," whether data or personal customizations, will indeed migrate into the network. However, both user experience and the reality of sometimes-connected networks suggest that there's a lot of reason to push many computing tasks and working data sets out to the client device. The client will change but it won't become just a portable version of a "dumb tube."
SAN FRANCISCO--The broad outline of Intel CEO Paul Otellini's keynote speech at the Intel Developer Forum on Tuesday was largely familiar: a single Intel Architecture (IA--which is to say x86) spanning everything from servers in the data center to electronics embedded in a television.
This is a self-serving argument coming from Intel. After all, Intel already holds commanding share throughout much of the traditional PC and server space. Translating that success into newer and developing areas of the market where Intel has not historically played--or where, in many cases, the market has not even historically existed--would be a huge win.
But Intel argues that it's not purely a matter of its own interests. Rather, developers and, ultimately, end users benefit from an architecture spanning the small to the large because it lets them leverage common tools and other software.
In the past, one of Intel's proof points for this claim was to demonstrate issues associated with browsing Web sites on smartphones and other devices running non-IA processors. However, such an argument wouldn't be very convincing today in the light of the generally high-fidelity browsing experience offered by products like the iPhone despite the fact that they don't use IA-architecture processors.
Intel even undermines its own argument for commonality when it admits--as Otellini did in his keynote speech--that "handhelds have to rethink the user experience," a comment followed by a demo of a prototype interface running on Moblin. Moblin is an open-source project focused on building a Linux-based platform optimized for the next generation of mobile devices.
Commonality as a benefit and principle is hard to argue against in the abstract. But handhelds differ in many ways from PCs. User experience, given differences in screen size and the way users interact with devices that don't have a full-size keyboard, is one obvious area. However, optimizations around power usage, performance, and component integration are also much different.
In short, software that runs across a wide range of device form factors and types will hardly be common across that range even if the underlying processor architecture is. At the same time, many of the software technologies visible to both developers and users--including Flash, browsers, and Linux--increasingly span a range of processor architectures.
None of this should be taken to suggest that Intel's Atom--the processor family that's spearheading the company's push into Netbooks, handhelds, and consumer electronics--won't succeed. Perhaps as Otellini suggested, in five years, Intel may indeed sell more system-on-a-chip (SoC) processors based on its Atom processor than traditional microprocessors.
However, to the degree that Intel succeeds in this area of the market, it won't primarily be because Atom is x86. It will be because Atom beats out its competitors on metrics such as power efficiency, cost, size, and the ability of Intel partners to leverage it for their own custom designs.
A good software development framework on Atom matters too and building from an IA foundation will help there. But ultimately it's about the chip, not the architecture.
NEW YORK--It was a larger and cheerier crowd that attended this year's Red Hat's analyst day at the New York Stock Exchange on Tuesday.
That shouldn't be surprising. At last year's meeting on October 7, Red Hat management had the dubious honor of ringing the closing bell on a day that saw the Dow Jones Industrial Average drop over 500 points.
This meeting took place in a time of what's probably best described as cautious optimism about the state of the economy. And in the context of Red Hat financial results that have continued to show growth at a time when so many companies in the IT industry and elsewhere have not.
For the quarter ending August 31, Red Hat's profit jumped 37 percent relative to the year-ago quarter, besting analyst estimates.
The day included a fair bit of discussion related to financial minutiae, as you'd expect for an event pitched primarily for financial analysts. However, it also included an overview of Red Hat's strategy and its technical direction. Here are a few things that caught my eye.
Jim Whitehurst, Red Hat's CEO, spent a fair bit of time talking about what boils down to fine-tuning of the company's go-to-market execution.
The message I took away from this is that Whitehurst isn't looking to change Red Hat's direction in any major way but sees a fair number of areas where more focused execution could lead to financial improvements. Later in the day, we also heard that Red Hat is taking a more systematic approach to which products it allocates development dollars for work such as internationalization.
For his part, Paul Cormier, executive vice president of products and technologies, reiterated Red Hat's belief that virtualization (which should be taken as hypervisor in this context) belongs in the operating system. This argument has been in evidence for a while as my fellow analyst Stephen O'Grady discussed after last year's event.
It stands in stark contrast to VMware's desire to make the operating system irrelevant. Or, to put it another way, VMware's ambition to make the VMware ESX and ESXi hypervisors the model for a new type of operating system. This is too fraught a debate to tackle here; I largely agree with Stephen's take in his post.
However, one of the interesting outcomes of this battle is that Red Hat has been cozying up to Microsoft, the other big gun on the "hypervisors belong in the OS" side. This includes Red Hat's announcement Wednesday "that customers can now deploy fully supported virtualization environments that combine Microsoft Windows Server and Red Hat Enterprise Linux."
This sort of interoperability is certainly a customer desire and both Red Hat and Microsoft can legitimately present it in those terms without anyone smirking. However, the enemy of my enemy is also my friend, at least up to a point.
I also took note that Red Hat finally seems to be making some progress on the management front.
The product in question is RHEV Manager (RHEV-M); it's covered in detail in this video from the Red Hat Summit in September and is currently being tested by customers.
One reason I think it's important is that Red Hat apparently, if belatedly, recognizes what a gap management has been. CTO Brian Stevens admitted that RHEV-M "has been a huge missing ingredient."
The one customer speaker at the analyst day was Dave Costakos of Qualcomm. He focused on his company's experiences testing RHEL-based virtualization and the associated RHEV Manager, which he said "hits the mark."
I caught up with Dave at a break to get a bit more detail. He told me that they wanted a Web-based interface, which RHEV Manager has. He also liked the integration with Active Directory and other directory systems, and the role-based access controls. He said that it could perform the provisioning operations that Qualcomm requires and otherwise meets their needs.
Management has historically been a relatively weak part of Red Hat's offering that was mostly focused on updating packages. This is really a reflection of the broader Linux and open-source ecosystem in general. Projects like Nagios and, more recently, GroundWork notwithstanding, management doesn't play well to the strengths of open source. It touches too many parts of an IT infrastructure and requires too much cooperative work with the vendors making the things that need to be managed.
It's reasonable to ask whether Red Hat is too late to win big with RHEV Manager and its associated KVM-based virtualization play. But it had to attack management from some angle unless it was prepared to just cede that area of differentiation and potential point of control to system makers and others.
Finally, no technology discussion today would be complete without at least a mention of cloud computing. Brian Stevens jokingly called it a "shiny thing that people are looking at how to monetize."
The cloud discussion covered several angles, not least of which was standardization efforts such as Deltacloud. Like most other standardization efforts, this focuses on what is often called Infrastructure-as-a-Service; Amazon EC2 and S3 are perhaps the best known examples. Stevens admitted that it's going to be much harder to define a standard set of higher-level services (platform as a service in the vein of Microsoft Azure) that are portable.
Red Hat's distinctive play in the infrastructure cloud essentially circles back to its approach to virtualization. In cloud infrastructure as imagined by Red Hat, the operating system matters in important ways.
That's because applications matter; indeed, applications are ultimately what matter most. And in on-premise computing, one of Red Hat's greatest values and differentiators is the vast number of certified applications that it runs. This certification matters to users because, if they encounter a problem, it means that they can call the application vendor to get support. Otherwise, they'd get a "sorry, that's not a supported configuration."
One can argue whether the software layering of which the historical operating system is a part is the most appropriate choice for cloud computing. Fellow CNET blogger James Urquhart explored that question in a pair of recent posts.
However, whether it's the way it should be or not, it is for now. And for Red Hat to be able to enable users to carry the certification of applications into a cloud model is a significant differentiation.
The debate over single-function server appliances versus general-purpose servers is a long-standing one.
Appliances first came onto the scene in the late 1990s during the first Internet boom. They focused on a particular task, such as Web serving, and were designed to be ready to install with minimum muss, fuss, or skill. This assembly line approach to server farms was to be the secret sauce that made possible infinite growth without infinite IT staff.
Cobalt Networks was perhaps the best known and most sophisticated of the companies to offer appliances. Sun Microsystems later acquired Cobalt and then failed to successfully integrate it. This arguably presaged the mixed history of subsequent Sun acquisitions in general. But it also highlighted how server appliances remained much more of a niche than envisioned by their more vocal proponents.
The knocks on server appliances then and now haven't changed much. EMC Global Marketing CTO Chuck Hollis lays out some of the negatives in a recent post.
It's not the first big appliance that causes the problem, it's when you have a fleet of them that you realize you've traded one class of headache for another.
None of them are built the same way. None of them manage the same way. None of them are supported the same way. None of them know how to work together in a cooperative fashion, and so on.
Want standardization at the different layers of an architectural stack? Sorry.
Want a pool of resources that can flow and flex to support whatever workload is at hand? Sorry, can't do.
Want to use the latest-greatest infrastructure technology from (choose your favorite vendor here)? Sorry about that as well.
Hollis sums up his case as follows: "You can see the nature of the trade-off, can't you? It's basically trading immediate gratification for a specific project versus creating long-term value through IT infrastructure."
EMC's interest in this debate is twofold.
The first is to push the notion of virtual appliances, pre-built virtual machine images that can be deployed on a virtual infrastructure. The idea is that an IT department can buy or build an encapsulated stack of software for a particular function and then deploy it across a server infrastructure of their own choosing. Given that EMC owns about 90 percent of VMware, its interest in promoting additional reasons to deploy server virtualization is obvious.
Virtual appliances also have a close affinity to cloud-computing infrastructure as a service. Amazon Machine Images (AMI) are a form of virtual appliance. And EMC is moving in this direction as well with Atmos.
The second reason that we see Chuck Hollis pushing back on the appliance concept is that we're seeing other large and powerful vendors, his competitors, promoting it. As opposed to the appliances of the Internet boom that mostly focused on network functions, this round is also, or even primarily, about heavy-duty business applications.
Oracle's Exadata is perhaps the canonical example of today's "Big Appliance," as Hollis phrases it. However, IBM has its own take on deploying and operating complex workloads such as business analytics. These may not be cookie-cutter appliances like a firewall or a Web server appliance. The tasks in question are too complex for that.
But they still bring together hardware and software from a single vendor and bundle them together. The marketing literature legitimately couches this integration in terms of customer benefits such as optimization. But such bundles also, and certainly not incidentally, increase a vendor's footprint and reduce the opportunity for others to capture a slice of the pie.
Server virtualization means something fairly specific. Storage virtualization is a bit more diffuse. But it's I/O virtualization that really covers a lot of ground.
At a high level, virtualization means turning physical resources into logical ones. It's a layer of abstraction. In this sense, it's something that the IT industry has been doing for essentially forever. For example, when you write a file to disk, you're taking advantage of many software and hardware abstractions such as the operating system's file system and logical block addressing in the disk controller. Collectively, these layers of virtualization simplify how what's above interacts with what's below.
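Logical block addressing makes a nice worked example: the layers above see a flat array of logical blocks, while the controller is free to change the physical mapping underneath. A toy Python sketch of the idea:

```python
# Toy sketch of logical block addressing: the layer above addresses a flat
# array of logical blocks; the controller maps them to physical blocks and
# can remap (e.g. around a failing block) without the OS ever noticing.

class DiskController:
    def __init__(self, num_blocks):
        # Identity mapping to start: logical block i -> physical block i.
        self.mapping = list(range(num_blocks))
        self.physical = [b""] * num_blocks

    def write(self, logical, data):
        self.physical[self.mapping[logical]] = data

    def read(self, logical):
        return self.physical[self.mapping[logical]]

    def remap(self, logical, spare_physical):
        # Move a logical block to a spare physical block transparently.
        self.physical[spare_physical] = self.physical[self.mapping[logical]]
        self.mapping[logical] = spare_physical

disk = DiskController(num_blocks=8)
disk.write(3, b"hello")
disk.remap(3, spare_physical=7)   # the physical location changes...
print(disk.read(3))               # ...but the logical view doesn't: b'hello'
```

Every level of virtualization discussed in this post, from file systems to virtual NICs, is a variation on this same indirection trick.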
I/O virtualization brings these principles to the edge of the network. Its general goal is to eliminate the inflexible physical association between specific network interface controllers (NICs) and host bus adapters (HBAs) and specific servers. As a practical matter in a modern data center, this usually comes down to virtualizing Gigabit Ethernet (and 10 GbE to come) and Fibre Channel links.
Virtualizing these resources brings some nice benefits. Physical resources can be carved up and allocated to servers based on what they need to run a particular workload. This becomes especially important when the servers themselves are virtualized. I/O virtualization can also decouple network and storage administration from server administration--tasks that are often performed by different people. For example, IP addresses and World Wide Names (a unique identifier for storage targets) can be pre-allocated to a pool of servers.
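The pre-allocation idea can be sketched in a few lines: administrators define the pool of identities up front, and servers draw virtual NICs and HBAs from it as workloads come and go. A hypothetical Python sketch (addresses and names are made up):

```python
# Hypothetical sketch of I/O virtualization's identity pooling: network and
# storage admins pre-define MAC addresses and World Wide Names; servers draw
# virtual NIC/HBA identities from the pool as workloads are provisioned.

class IOPool:
    def __init__(self, macs, wwns):
        self.free_macs = list(macs)
        self.free_wwns = list(wwns)
        self.assigned = {}   # server name -> {"mac": ..., "wwn": ...}

    def attach(self, server):
        # Give a server a virtual NIC and a virtual HBA from the pool.
        identity = {"mac": self.free_macs.pop(0), "wwn": self.free_wwns.pop(0)}
        self.assigned[server] = identity
        return identity

    def detach(self, server):
        # Return identities to the front of the pool so a migrating
        # workload can reclaim the same MAC and WWN on another server.
        identity = self.assigned.pop(server)
        self.free_macs.insert(0, identity["mac"])
        self.free_wwns.insert(0, identity["wwn"])

pool = IOPool(macs=["00:1b:21:aa:00:01", "00:1b:21:aa:00:02"],
              wwns=["50:06:01:60:00:00:00:01", "50:06:01:60:00:00:00:02"])

first = pool.attach("server-a")
pool.detach("server-a")           # the workload moves...
second = pool.attach("server-b")  # ...and its I/O identity follows it
print(second == first)            # True
```

The point of the sketch is the separation of duties: the pool is defined once by the network and storage teams, while server administrators simply attach and detach.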
That's I/O virtualization conceptually. Vendors are approaching it from a lot of different directions.
For starters, like many things, I/O virtualization has its roots in the mainframe. From virtual networking within servers to channelized I/O without, many aspects of I/O virtualization first appeared in what is now IBM's System z from whence it made its way into other forms of "Big Iron" from IBM and others. Thus, many servers today have various forms of virtual networking within the box whereby virtual machines communicate with each other using internal high-performance connections that appear as network links to software.
However, I/O virtualization in the distributed systems sense first arrived in blade server designs. Egenera was the pioneer here. HP's Virtual Connect for its c-Class BladeSystem and IBM Open Fabric for its BladeCenter are more recent and more widely sold examples. And virtualization, including I/O virtualization, lies at the heart of Cisco's Unified Computing System (UCS).
Blade architectures incorporate third-party switches and other products to various degrees. However, they're largely an integrated technology stack from a single vendor. Indeed, this integration has arguably come to be seen as one of the virtues of blades. In this sense, they can be thought of as a distributed system analog to large-scale SMP.
A new crop of products in a similar vein aren't tied to a single vendor's servers.
Aprius, Virtensys, and NextIO are each taking slightly different angles, but all are essentially bringing PCI Express out of the server to an external chassis where the NICs and HBAs then reside. These cards can then be sliced up in software and divvied up among the connected servers. Xsigo is another company taking a comparable approach but using InfiniBand-based technology rather than PCIe.
Whatever the technology specifics, the basic idea is to create a virtualized pool of I/O resources that can be allocated (and moved around) based on what an individual server requires to run a given workload most efficiently.
There's a final interesting twist to I/O virtualization. And that's access to storage over a network connection. While network-attached file servers are suitable for many tasks, heavy-duty production applications often need the typically higher performance provided by so-called block-mode access. For more than a decade, this has tended to translate into storage subsystems consisting of disk arrays connected to servers by a dedicated Fibre Channel-based storage area network (SAN).
However, with the advent of 10 GbE networks and associated enhancements to Ethernet protocols, we're starting to see interest in the idea of a "unified fabric"--a single infrastructure to handle both networking and storage traffic. One of the key technology components here is a protocol called Fibre Channel over Ethernet (FCoE) that allows block-mode storage access originally intended for Fibre Channel networks to traverse 10 GbE instead.
There's more to unified fabrics than that, including alternate protocols such as iSCSI and various acceleration technologies, but for our purposes here, I'll use FCoE as a blanket term.
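The basic mechanics of FCoE are easy to sketch: a complete Fibre Channel frame becomes the payload of an ordinary Ethernet frame, tagged with the FCoE EtherType (0x8906) so the fabric knows what it's carrying. A simplified Python illustration (real FCoE also adds a version field, start-of-frame and end-of-frame delimiters, and padding; the MAC addresses and FC bytes below are made up):

```python
import struct

FCOE_ETHERTYPE = 0x8906  # the EtherType assigned to FCoE

def encapsulate(dst_mac, src_mac, fc_frame):
    """Wrap a Fibre Channel frame in an Ethernet frame (simplified: real
    FCoE adds a version field, SOF/EOF delimiters, and padding)."""
    header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return header + fc_frame

def decapsulate(eth_frame):
    """Strip the 14-byte Ethernet header, checking the EtherType."""
    (ethertype,) = struct.unpack("!H", eth_frame[12:14])
    assert ethertype == FCOE_ETHERTYPE, "not an FCoE frame"
    return eth_frame[14:]

fc_frame = b"\x22\x00\x00\x01" + b"SCSI payload"   # stand-in FC frame bytes
wire = encapsulate(b"\x0e\xfc\x00\x00\x00\x01",    # made-up destination MAC
                   b"\x00\x1b\x21\xaa\x00\x01",    # made-up source MAC
                   fc_frame)
print(decapsulate(wire) == fc_frame)                # True
```

Because the FC frame rides intact inside Ethernet, existing Fibre Channel storage semantics survive the trip across the converged 10 GbE fabric.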
So what does FCoE have to do with I/O virtualization? After all, an adapter card optimized for FCoE can be virtualized alongside other NICs and HBAs. So, at first glance, you might think that FCoE and I/O virtualization were simply complementary.
At one level, you'd be right. Aprius, for example, advertises that it provides "virtualized and shared access to data and storage network resources (Ethernet, CEE, iSCSI, FCoE, network accelerators) across an entire rack of servers, utilizing the ubiquitous PCI Express (PCIe) bus found in every server."
However, considered more broadly, I/O virtualization and FCoE solve many of the same problems--that of connecting servers to different types of networks without a lot of cards and cables associated with each individual server.
Adapters that connect to converged networks will themselves converge to card designs that can handle a wide range of both networking and storage traffic. Furthermore, if Ethernet's history is any indication, prices are likely to drop significantly over time; this would make finely allocating networking resources among servers less critical.
To the degree that each server can get a relatively inexpensive adapter that can handle multiple tasks, the rationale of bringing PCIe out to an external I/O pool is, at the least, much reduced. There are still rationales for virtualizing I/O in some form--especially in an integrated environment such as blades. Cisco, for example, puts both FCoE and virtualization front-and-center with its Unified Computing System. But narrow justifications for I/O virtualization such as reducing the number of I/O cards required are significantly weakened by FCoE.
In the end, FCoE may not be I/O virtualization as such, but it's closely related in function if not in form.
There are many different technology adoption models out there. Geoffrey Moore's curve--the one that uses terms such as "Early Adopters" and "Late Majority"--is a common one. And different technologies end up getting adopted at strikingly different rates. This fascinating chart from The New York Times shows how the telephone made its way into U.S. homes only over a span of many decades while the VCR went from rare to commonplace over about a single 10-year stretch.
In general, new technologies are permeating the market faster than ever before. Still, the length of time it takes for even an ultimately successful innovation to become commercially important is routinely underestimated by lots of industry watchers. I've been guilty of this myself.
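Adoption curves like these are commonly modeled as a logistic S-curve, where a single growth-rate parameter separates a telephone-like crawl from a VCR-like sprint. A sketch with purely illustrative rates:

```python
import math

def adoption(year, midpoint, rate):
    """Logistic S-curve: fraction of households that have adopted."""
    return 1.0 / (1.0 + math.exp(-rate * (year - midpoint)))

def years_10_to_90(rate):
    """Time to go from 10% to 90% adoption: 2*ln(9)/rate."""
    return 2 * math.log(9) / rate

# Illustrative growth rates only: a slow, telephone-like diffusion
# versus a fast, VCR-like one.
print(f"slow: {years_10_to_90(0.10):.0f} years from 10% to 90%")
print(f"fast: {years_10_to_90(0.45):.0f} years from 10% to 90%")
```

With these assumed rates the slow curve takes about 44 years to run from 10 percent to 90 percent adoption and the fast one about 10, which is roughly the telephone-versus-VCR contrast in that chart.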
One issue is that many of us in the IT ecosystem are early adopters by nature. We're enthusiastic about the new coolness for its own sake, not just for what it's capable of. By contrast, the ultimate buyers are often more conservative and mostly want technologies that have already proven themselves. It's a bias that we as analysts try to guard against, in part by speaking with different types of end users.
Another issue is that new technologies are often more interesting in combination with other pieces than they are in isolation. To use the old cliche, the whole is greater than the sum of the parts. However, the corollary is that it takes more work and more time to bring that combination into being than it does just one component. Frederick Brooks discussed this reality in the context of bringing the IBM System 360 to market in his widely read "The Mythical Man-Month".
I bring up this topic because of something that caught my eye in a Web 2.0 Summit presentation by Mary Meeker of Morgan Stanley. She devoted a large chunk of her presentation to mobile trends, beginning with a slide that stated "Mobile = Incremental Driver of Internet User / Usage Growth." She went on to say that "Mobile Internet usage is and will be bigger than most think."
This computing growth includes Apple. She stated that "Near term, Apple is driving the platform change to mobile computing. Its mobile ecosystem (iPhone + iTouch + iTunes + accessories + services) market share / impact should surprise on the upside for at least the next 1-2 years." However, it also includes a rich set of other devices including automobile electronics and home entertainment devices. In some respects, this is the "Internet of things," as Sun Microsystems CTO Greg Papadopoulos has called it. (Although as Richard MacManus over at ReadWriteWeb suggests, the full Internet of things, including RFID sensors and the like, is something more expansive.)
The "secret sauce" in this growth? Location-based services. Meeker quoted Mathew Honan, of Wired magazine, who wrote: "Simply put, location changes everything. This one input - our coordinates - has the potential to change all the outputs. Where we shop, who we talk to, what we read, what we search for, where we go - they all change once we merge location and the Web."
What caught my eye about all this was that I remember all the enthusiasm over the imminent arrival of the mobile Web back during the first Internet build-out about a decade ago. Here's a typical press release from a company named Optus in November 2000: "Mobile phone users can locate a close-by restaurant, chemist, bank or cinema now that Cable & Wireless Optus has launched Australia's first range of sophisticated location-based services on its Wireless Application Protocol (WAP) service, Optus Networker."
There were many such claims at the time and many proclamations that "place" was the Next Big Thing.
Ultimately it appears the proclaimers were right. But it took a while. It arguably took the second or third iteration of the iPhone for smartphone applications that make use of the user's location to take off in a big way--and thereby make the promises of those year-2000 press releases a mainstream reality.
Some of it is just technological maturity of the device and the network. A mobile browser that can access the "real" Web with reasonable fidelity and performance rather than being restricted to a dumbed-down mobile Web turned out to be one major piece.
Key too was a development environment that made it possible for many casual developers to create applications and not just a few working closely with a handset maker.
The vast amounts of data created over a number of years through various types of social media are pretty important as well. We mostly don't find nearby restaurants through formally curated data; we find them through the likes of Yelp.
In short, the rich mobile experience isn't about one thing but many. And aligning the pieces always takes time.
11/5/09 and 11/6/09
Multicore processors are here to stay and the number of cores that we'll see packed onto a single chip is only going to increase. That's because Moore's Law is only indirectly about performance; it's directly about increasing the number of transistors. And, for a variety of reasons, turning those transistors into performance today largely depends on cranking up the core count.
There's a downside to this approach though. Programs that consist of a single thread of instructions can only run on a single core. This in turn means that they're not going to get much faster no matter how many cores a chip adds. Running faster means going multi-threaded--splitting up the task and working on the different pieces in parallel. The problem is that programming multi-threaded applications introduces complications that don't exist with single-threading.
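The decomposition this implies can be sketched in a few lines. The following is an illustrative Python example (the worker count and data are arbitrary); note that in CPython the global interpreter lock keeps pure-Python threads from speeding up CPU-bound work, so the point here is the splitting pattern itself, not a measured speedup:

```python
import threading

def parallel_sum(data, n_workers=4):
    """Split a summation into chunks, one per worker thread.

    The task is decomposed into independent pieces that run
    concurrently and whose partial results are combined at the end,
    which is the basic shape of going multi-threaded.
    """
    chunk = (len(data) + n_workers - 1) // n_workers
    partials = [0] * n_workers

    def work(i):
        # Each worker sums only its own slice; no shared mutable state.
        partials[i] = sum(data[i * chunk:(i + 1) * chunk])

    threads = [threading.Thread(target=work, args=(i,))
               for i in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(partials)
```

Each worker writes only its own slot in `partials`, which sidesteps the coordination complications discussed below.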
These complications and ways to overcome them was the topic of my conversation with James Reinders at the Intel Developers Forum in September. Reinders is the director of marketing and business for Intel's Software Development Products. He's an expert on parallelism and his most recent book covered the C++ extensions for parallelism provided by Intel Threaded Building Blocks.
In part 1 of this discussion we talked about how to think about performance in a parallel programming environment, why such environments give developers headaches, and what can be done about it.
Reinders began by noting that developers fall into roughly two groups when it comes to parallel programming: those who are still concerned about ultimate performance even in a parallel world and those who are just looking for a way to deal with it at all.
The challenge is understanding what we're trying to introduce, how to use parallelism, but with programmer efficiency. Because programmers don't need yet another thing to worry about. There's plenty of those out there.
And we need to be a little more relaxed about the performance. The people who start asking me about efficiency in every last cycle used and such--I characterize them as people we need to talk to more about our high-performance computing-oriented tools that give you full control. And other people are "I don't even know how to approach parallelism." I think there is a different set of ways to talk about the problem.
The problem with this second group comes down to the fact that most programmers are used to dealing with something called "sequential semantics." A detailed description of programming semantics is a complex computer science topic but, at a high level, sequential semantics means more or less what it sounds like: instructions follow one after another and execute in the order that they are written.
If you store the number "1" in variable A, then store the number "2" in variable B, and then add them together in a third instruction, you can be confident that the answer will be "3." It won't depend on timing vagaries that might have caused the addition to happen before the stores. Most people start out programming sequentially using languages designed for that purpose.
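To make the contrast concrete, here's a minimal Python sketch (iteration and thread counts are arbitrary) of what happens when that guarantee goes away: an unsynchronized read-modify-write on a shared counter can lose updates, while a lock restores a deterministic result:

```python
import threading

ITERS = 100_000
N_THREADS = 4

def count_up(use_lock):
    """Increment a shared counter from several threads.

    Without the lock, 'count += 1' is a read-modify-write that two
    threads can interleave, so updates may be lost (a data race).
    With the lock, the result is always ITERS * N_THREADS.
    """
    count = 0
    lock = threading.Lock()

    def work():
        nonlocal count
        for _ in range(ITERS):
            if use_lock:
                with lock:
                    count += 1  # the read-modify-write is now atomic
            else:
                count += 1  # racy: the final total is timing-dependent

    threads = [threading.Thread(target=work) for _ in range(N_THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return count
```

The locked version always returns 400,000; the unlocked version may or may not, depending on how the threads happen to interleave, which is exactly the kind of non-determinism sequential programmers never had to think about.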
Parallel programming, on the other hand, introduces concepts like data races (the answer is dependent on the timing of other events) and deadlocks (in which two threads are each waiting for the other to complete so that neither ever does). Here's Reinders:
If you've ever managed and got a bunch of people working on a project together, one of the headaches you get is coordinating with each other. What did Fred say to Sally? They're doing things out of order or whatever. Parallel programming can give you that same sort of headache.
The programming terminology you'll hear the compiler people use is "sequential semantics." One of the interesting areas is what can we do if we ensure sequential semantics. We recently acquired a team in Massachusetts who were working for a company called Cilk Arts.
Our hope is that Cilk can do a subset of what Threaded Building Blocks [TBB] can but preserve sequential semantics. We think we can do sequential semantics, do a subset of what TBB does, since we're introducing keywords into the compiler--that has some disadvantages because it's not as portable--but we think we might be able to magically give you sequential semantics and not give up performance. That's a big if.
Now why would we invest in that?
Because there are a lot of programmers who have been getting along just fine with sequential programming. But when you tell them to add this or that for parallelism, a big thing that trips them up is that you no longer obey sequential semantics; you have more than one thing running around and you get data races, deadlocks, and it doesn't feel comfortable.
Now some people will argue that you need to do these things to get good performance. We have the feeling that in some cases you don't need to take that big of a leap to get pretty good performance.
And no one's going to criticize your app on a quad core for being only 70 percent efficient.
From there we moved on to data parallelism which focuses on distributing data across processing elements. It contrasts with the task parallelism that we commonly associate with the term parallel programming. Pervasive DataRush is one commercial product based on a data parallelism model. APL, the language with the strange symbols (for those with long memories), is often considered the first data parallel language. There have been a variety of others, often extensions to more conventional languages like C and FORTRAN, but none were widely used.
Data parallelism just takes it one step further. Data parallelism is all about the parallelism in the data. So you're talking about the data when you program.
And once you start talking about the data, the tools underneath can move the data around. Leaving the data management up to the programmer [as with Cilk and TBB] turns out to be a terrific headache. This applies equally to a cluster where they don't share memory or a GPU and a CPU in the same system.
But a language like RapidMind or Ct can address that problem. And CUDA and OpenCL can too [frameworks primarily oriented towards heterogeneous processing that uses graphics cores for computing tasks] but RapidMind and Ct are at a much higher level of abstraction which means that we're betting on the idea that we can attract more developers and give up some efficiency.
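The data-parallel style Reinders describes can be suggested with a small sketch. This is not how RapidMind, Ct, or DataRush actually work internally--just an illustrative Python stand-in in which the programmer says "apply this function to every element" and the runtime decides how to partition the data:

```python
from concurrent.futures import ThreadPoolExecutor

def data_parallel_map(fn, data, n_workers=4):
    """Data-parallel style: express the operation over the whole
    collection; let the runtime split the data across workers.
    Here the 'runtime' is a simple block partitioning."""
    chunk = max(1, (len(data) + n_workers - 1) // n_workers)
    blocks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        # map() preserves block order, so results come back in order.
        results = pool.map(lambda block: [fn(x) for x in block], blocks)
    return [y for block in results for y in block]
```

The caller never touches threads, locks, or data placement--the "talking about the data" rather than the mechanics that the quote below emphasizes.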
Part 2 of our conversation will cover cloud computing, functional and dynamic languages, and what needs to happen with respect to programmer education.
Intel's James Reinders is an expert on parallelism; his most recent book covered the C++ extensions for parallelism provided by Intel Threaded Building Blocks. He's also the Director of Marketing and Business for the company's Software Development Products. In Part 1 of our discussion at the Intel Developers Forum in September, we talked about how to think about performance in a parallel programming environment, why such environments give developers headaches, and what can be done about it.
Here, in Part 2, we move on to cloud computing, functional and dynamic languages, and what needs to happen with computer science education.
Few wide-ranging conversations these days would be complete without at least a nod to cloud computing which Reinders views as very much connected to the matter of parallel programming.
Cloud computing is parallel programming. You're solving the same problem. In fact, someone that's good at decomposing a program to run in parallel on a multicore or on a supercomputer... the same thought process is necessary to decompose a problem in cloud computing. What's different in cloud computing is that the cost of a connection or a communication between two different clouds is so high. You really need to get it right. It works best when a little message is sent, does an enormous amount of computing, and gets a little message back.
Data parallelism tends to be very fine-grained.
Task parallelism like we see with Cilk and Threaded Building Blocks is a little bit more coarse.
Cloud computing has to be very very coarse-grained parallelism.
But there's something common about how you have to think about it.
The tools that will let people do cloud computing, express a problem in cloud computing, may eventually just map onto a multicore.
The granularity that Reinders discusses refers to how small a chunk of computing can be, given the cost and latency of communications. Within a single processor, communications bandwidth is high and latencies low, so software can afford to perform a relatively small task and then synchronize the results. (Although moving large amounts of data can still be relatively "expensive," which is why data parallelism can be finer-grained than task parallelism.)
By contrast, external communication networks have limited bandwidth and are relatively slow--on the order of four or five orders of magnitude slower than communications within a system. Therefore, tasks have to be parceled out in relatively large chunks that, ideally, don't have to be packaged up with a significant amount of local data.
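A toy cost model makes the point. The numbers below are illustrative assumptions, not measurements--the message cost simply jumps by four orders of magnitude between the "local" and "cloud" cases, per the paragraph above:

```python
def run_time(total_work, n_chunks, per_message_cost, n_workers):
    """Toy model: total time is the per-worker compute share plus one
    message round-trip per chunk handed out."""
    compute = total_work / n_workers
    comm = n_chunks * per_message_cost
    return compute + comm

# Within one processor, messages are cheap: fine-grained chunking
# barely adds overhead.
local = run_time(total_work=1e6, n_chunks=10_000,
                 per_message_cost=0.01, n_workers=8)

# Across a network, each message costs ~10,000x more, so the same
# fine-grained decomposition is swamped by communication...
cloud_fine = run_time(1e6, 10_000, 100.0, 8)

# ...while handing out a few big chunks keeps overhead tolerable.
cloud_coarse = run_time(1e6, 10, 100.0, 8)
```

Under these assumed numbers, fine-grained chunking is nearly free locally but dominates total time in the networked case, which is why cloud-style parallelism has to be very coarse-grained.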
Next up was education. Here, Reinders' basic message was to focus on the theory before diving into the implementation details. I suspect that this highlights one of the key challenges: Parallel programming tends to require a solid grasp of programming theory and doesn't lend itself particularly well to just "hacking around" in the absence of that grounding.
I've been doing a lot in the area of teaching parallelism. What a lot of people think of right away is teach them locks, teach them mutexes [algorithms to prevent the simultaneous use of a common resource], teach about how to create a thread, destroy a thread. That's all wrong. You want to be talking at a higher level. How do you decompose an algorithm? What is synchronization in general? Why does it exist?
Things I would hope undergraduates would learn are parsing theory, DAG representations [a tool used to represent common subexpressions in an optimizing compiler], database schemas, data structures, algorithms. All these are high level, not things like [the programming language] Java. Parallel programming's like that too. You get hands-on touching the synchronization method or whatever but you want to teach the higher level key concepts.
Some people it's going to be more in-tune with their thinking but you try and teach it to everyone.
Given that most of today's languages weren't expressly designed for parallel programming, discussions about parallelism often turn to new programming languages. This means functional languages most of all but can also involve dynamic or scripting languages which generally handle more low-level details under the covers than do Java or C++.
Functional languages don't lend themselves to easy, or easily comprehensible, description. A common shorthand is that "Functional programming is a style of programming that emphasizes the evaluation of expressions, rather than execution of commands." But that probably doesn't help much if you don't already know what it is. As for Wikipedia's entry, Tim Bray--no programming slouch--called it fairly impenetrable. (Perhaps you begin to see the problem.)
A couple of things I'm interested in functional languages for. We don't wake up one day and everyone uses them. It's sequential semantics again; sequential semantics appeal to people and functional languages don't have them. But some people eat them up.
And they solve amazing problems. You can code things up in them that are much easier to understand than if they are written in a traditional language although they can be cryptic or terse to a lot of programmers.
Erlang [a functional language] has gotten a bit more and more usage. Maybe it is creeping in. It's not going to take over the world overnight but it seems like the one that might stay around. May be talking about it 20 years from now and saying, yeah, Erlang's been around for 25 years. It might be accepted as a language. It may have legs.
But even Java. [Unlike Erlang,] It appealed to people who programmed in C and C++; it didn't challenge them to think differently. And because of the strict typing and stuff it helps [the enterprise developer] to deploy certain types of apps.
Python [a dynamic language] is interesting. It is so popular with a lot of scientists. It's on my short list of things, where if we can figure out where to partner or extend some of the things we're doing, Python's on my short list of languages that we want to help with parallelism. Maybe some of our Ct technology would apply there. We'll see if other people agree with us. Think the concepts we're talking about are pretty portable.
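The "expressions rather than commands" distinction is easier to show than to define. Here's a small illustrative contrast in Python (which supports both styles), computing the same sum of squares imperatively and functionally:

```python
from functools import reduce

# Command (imperative) style: mutate state step by step.
def sum_squares_imperative(xs):
    total = 0
    for x in xs:
        total += x * x  # each step updates shared state
    return total

# Expression (functional) style: the result is the value of a single
# expression built from pure functions. Nothing is mutated, which is
# one reason functional code tends to parallelize more readily.
def sum_squares_functional(xs):
    return reduce(lambda acc, x: acc + x * x, xs, 0)
```

Both return the same value; the functional version just describes *what* the result is rather than *how* to update state to reach it.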
Finally, we concluded our discussion with hardware. Are there opportunities at the hardware and firmware level with memory subsystems or with specific technologies such as transactional memory? Sun Microsystems was very interested in transactional memory in the context of its now-canceled "Rock" microprocessor. The basic concept behind transactional memory is to provide an alternative to lock-based synchronization by handling concurrency problems as they occur, at a low level, rather than having the programmer protect against them all the time.
The best solutions tend to not be silver bullets so much as incremental. Nehalem [Intel's latest microprocessor generation] in a way probably helped us more than anything in recent memory because we moved to the QuikPath interconnect and moved bandwidths up and latencies down. Larrabee [a many-core Intel microprocessor still under development] may pave the way with some innovations in interconnects. I think there may be some refinements needed. Interconnecting the processors is a classic supercomputer issue.
Transactional memory has slammed up against a very tough reality, which is that hardware always wants to be finite; software solutions want to be infinite. I think there's something there. I think the people looking at transactional memory have started to make observations about locks that may end up being useful. It's funny. The mission of transactional memory is to get rid of locks, but the more they looked at it the more they understood about how locks behave. There might actually be possibilities to make locks behave better in hardware.
Can we do the hardware a little differently? Not the sexiest thing in the world. But as we move from single-threaded to multi-threaded what complications are we creating things [that the hardware can help with]?
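For context, here's the kind of lock-based discipline transactional memory aims to make unnecessary--an illustrative Python sketch (the `Account` class is hypothetical) in which two locks must be acquired in a fixed global order to avoid the classic deadlock where each of two threads holds one lock and waits on the other:

```python
import threading

class Account:
    def __init__(self, balance):
        self.balance = balance
        self.lock = threading.Lock()

def transfer(src, dst, amount):
    """Move money between accounts under both locks.

    Acquiring the locks in a fixed global order (here, by object id)
    prevents deadlock: two opposing transfers can never each hold one
    lock while waiting for the other. Transactional memory would
    instead run both as transactions and retry on conflict.
    """
    first, second = sorted((src, dst), key=id)
    with first.lock:
        with second.lock:
            src.balance -= amount
            dst.balance += amount
```

The programmer must remember the ordering rule everywhere locks are taken; forget it in one place and the deadlock returns, which is precisely the burden TM proposals try to lift.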
Even if you don't subscribe to the more extreme views of programming and software being in a crisis because of the move to multi-core, we're clearly in a transition. New tools are needed and programmers will have to adapt as well, to at least some degree.
More and more of our computing happens through applications and Web sites out in the network. It's in the "cloud" to use the current trendy lingo.
One consequence is that we're starting to look at our clients differently. That's because they're increasingly a sort of window into the cloud rather than devices that run a lot of application-specific code and store a lot of application-specific data locally. Clients can therefore be "thinner," which is to say loaded with less software and less tailored to the needs and wants of a given user. Resources and customization live out in the network instead.
Even with more conventional operating systems such as Windows, Linux, or OS X, running applications in the network reduces the time spent installing and upgrading applications on our proliferating collection of clients. A browser-centric operating system takes the concept to the next level and essentially reimagines the client OS for a cloud world.
However, the real world is messier and more complicated than "Just run everything in a browser." That's true today and will almost certainly be true to at least some degree next month and next year. Ultimately, this question of how thin clients can become as a practical matter is going to play a big role in how accepted certain models of computing will become.
To illustrate, consider a PC that is today mostly used to go online. There's more than just an OS and a basic browser involved.
There are plug-ins and extensions for the browser. There's probably an IM client. If you use Twitter, there's a good chance you run an application like TweetDeck or Seesmic, which may in turn require Adobe's AIR runtime; there are Web-based alternatives, but most people run a local client. Third-party media applications such as Apple's iTunes are commonplace, as are the likes of Google Earth and Windows Live Writer. The list of applications and components that have to be installed and updated goes on--and will vary by user--even for a rather bare-bones PC configuration.
And that's before we even broach device drivers or other software that may be required to connect a camera, a microphone, or some other peripheral.
My overarching point here is not that a thinner client model is uninteresting. I strongly believe in it--though to augment traditional fat clients rather than replace them. Today, I have a notebook that is essentially used only to go online, yet I still have all the administration associated with a full-blown PC.
However, the challenge for Google and others is to steer a course that creates an "Internet computer" that is legitimately better in that role than a full-fledged PC while retaining sufficient customization. Application stores may be part of the answer, as may making browsers more capable of running applications.
Whatever the specific technical solutions though, the answer will involve a lot of careful thought about balancing simplification and flexibility.
There was legitimate debate at one point whether the style of cloud computing often called Platform-as-a-Service (PaaS) was really going to take off in a big way.
The aim of PaaS is to supply developers with a set of services that they can use to build scalable applications without doing all the underlying grunt work themselves.
Such a platform might automatically add additional capacity in response to increased load. Or it could offer various middleware services, such as databases and application servers. (The National Institute of Standards and Technology has a definition document that I and many others use to help make sure we're all on the same page when it comes to the types and characteristics of cloud computing.)
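What "automatically add additional capacity" might mean can be suggested with a toy scaling policy. This is purely an illustrative sketch--the thresholds and the function itself are invented for this example, not drawn from any actual PaaS:

```python
def scale_decision(cpu_utilization, n_instances, low=0.3, high=0.7):
    """Toy autoscaling policy of the sort a platform might run on a
    developer's behalf: add an instance under high load, remove one
    under low load, otherwise hold steady. Thresholds are arbitrary."""
    if cpu_utilization > high:
        return n_instances + 1          # scale out
    if cpu_utilization < low and n_instances > 1:
        return n_instances - 1          # scale in, but never below one
    return n_instances                  # steady state
```

The point is that the developer writes none of this: the platform watches load and adjusts capacity, which is exactly the grunt work PaaS promises to absorb.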
As is always the case with such things, the lines between what is a platform, what is just infrastructure, and what is end-user hosted software blur a bit. In the main, though, platforms are a higher level of abstraction than infrastructure but don't offer something directly useful to end users out of the box.
The questions about PaaS were at least two-fold.
For one thing, while cloud infrastructure has a fairly clean correspondence to physical and virtual infrastructure in a data center and Software-as-a-Service is just hosted software in many respects, PaaS doesn't map especially well to familiar concepts. It's partially related to middleware but also includes forms of background automation that haven't historically existed.
There's also the lock-in concern. Cloud infrastructure services like Amazon EC2 and S3 aren't standardized in a formal way. But their interfaces are straightforward enough that a third-party like RightScale can map them across different providers. Alternatively, others can treat them as effectively a de facto standard and mimic them for their own implementations as Eucalyptus is doing.
But PaaS is more vendor-specific, and the more layers of specialized function it adds, the more specific it becomes. This doesn't concern everyone, however. For example, one person I spoke with generally endorsed the idea of vendors competing on the basis of unique differentiation that users need. As he put it: "I don't see benefit to getting the exact thing from three different providers; then you're just competing on price, not features." And the reality is that moving from one vendor's middleware and other supporting application infrastructure to another's has never been an easy and transparent process.
However, upon reading a post by fellow CNET Blog Network writer James Urquhart, it's becoming clear to me that PaaS is an important component of cloud computing.
Microsoft's Azure rollout at its Professional Developers Conference was, by all appearances, a big hit. I've personally viewed Azure as a major bellwether for PaaS, given the large Microsoft development community. If Azure clicks with .Net developers, it bodes well for the PaaS concept.
James also notes that "Ruby on Rails platform service vendor Heroku reportedly hosts more than 40,000 applications now. At its Dreamforce conference in San Francisco, Salesforce.com mentioned it had approximately 135,000 applications running on its Force.com platform" and that "anecdotal evidence suggests there is a large body of Web application developers running on both the Java and Python instances" of Google App Engine.
Google App Engine's relatively low profile was one reason to be somewhat skeptical of PaaS a year ago. Today, I'm still unconvinced App Engine is living up to some of the early expectations that surrounded it. Nonetheless, in the context of clear PaaS advances elsewhere, it's another data point for an at least moderately popular offering.
To these, I'd add that cloud infrastructure is expanding and morphing into something that looks more like a platform. Newer Amazon services such as Elastic MapReduce and Relational Database Service blur the line between what is infrastructure and what is something more. Arguably, Simple Queue Service already did this from the early days but the new services can increasingly handle the mechanics of scaling an application transparently to a developer.
In fact, given this apparent demand for more abstraction and higher-level services, I wonder if we're starting to see cloud infrastructure essentially morph into a platform.
The nice thing about standards is that there are so many of them.
This old saw is arguably less true than in years past. Today, for a lot of reasons, there's more pressure to reach agreement on one way to do a certain thing. (Think the HD DVD vs. Blu-ray debacle for an example of what happens when vendors can't agree on a single approach.)
Standards aren't a single thing. Some have been blessed with the appropriate incantations by some official or quasi-official body. Others come from an industry consortium. And still others are "de facto" (or at least began life that way)--the result of a dominant company or just a default way of doing things.
The purist will argue that just being widely used doesn't make something a standard. I agree up to a point and only use the "standard" term in this case for things that are truly ubiquitous. Contrariwise, a rigorous formal ratification process is no guarantee of success.
But some standards do win big and become part of just how IT gets done. Here are some of them.
Like many other successful standards, Ethernet has remained a fixture in local area networks for so many years in part by adapting to many successive waves of technology. First developed in the famous Xerox PARC labs in the mid-1970s, it initially ran over coaxial cable but soon moved to twisted pair cable with the 10 Mbit/second generation. 10 Gbit/second Ethernet is now starting to roll out along with a variety of additions to the specification that make it more suitable as a high-performance unified fabric.
Ethernet's initial success resulted in no small part from coordinated standardization efforts beginning in the IEEE. This helped it beat out alternatives, most notably IBM's Token Ring. Over time, Ethernet's ubiquity and the cost benefits provided by this volume helped it largely stave off server interconnect challengers. InfiniBand has had wins in high-performance computing and certain other clustering applications, but it didn't displace Ethernet as a "server area network" as early promoters had hoped.
PCI, Peripheral Component Interconnect, had its beginnings as an Intel-developed bus for connecting internal cards within systems. The version 1.0 spec came out in 1992. Given the ubiquity of PCI these days, it's easy to forget that it took nearly a decade to replace a plethora of other busses, both standardized and proprietary, first in x86 systems and later in large Unix servers based on other processors.
Nor was the process steady. Although PCI was initially introduced in part to replace the VESA Local Bus for graphics cards--which it eventually did--PCI was itself replaced by AGP (Accelerated Graphics Port) for a time prior to the PCI Express generation.
PCI Express makes for an interesting case study in the marketing of standards. With technology bumping up against the limits of parallel I/O busses like conventional PCI, the Arapahoe Working Group--spearheaded by Intel--started pushing a new serial interconnect standard in about 2001. Arapahoe's success was by no means pre-ordained. AMD's HyperTransport was one alternative among several.
Arapahoe required hardware that was largely different from PCI but it was compatible with PCI's software model in a number of respects. And this was enough to get Arapahoe adopted by the keeper of the PCI standard, the PCI-SIG, and get the SIG's imprimatur on what would now be called PCI Express. And that helped make it the obvious heir to PCI. Names matter. (Here's a more detailed accounting of PCI Express and its history.)
It's easy to forget just how painful it could be, in the years before USB (Universal Serial Bus), to connect external peripherals to a computer system. RS-232, a long-used and successful standard in its own right, was the most common way. It was also a way that could easily devolve into examinations of cable pin-outs, interrupt channels, and memory addresses.
USB was a cooperative effort by a group of large technology vendors who founded a non-profit corporation to manage the specification. Version 1.0 was introduced in 1996. Now up to version 3.0, USB has become the standard way to connect external peripherals to PCs; it's also commonly used on servers for devices such as printers.
There's a spec for wireless USB but, like other standards intended to connect peripherals to computers wirelessly, it's never taken off. The current such "personal area network" getting the most buzz is My WiFi from Intel.
USB's primary competition has been FireWire, Apple's name for IEEE 1394. Unlike USB, it does not need a host computer and is faster than the USB 2.0 generation. However, it didn't catch on widely in the computer industry outside of Apple (which is phasing it out in favor of USB) and video equipment.
TCP/IP refers to the combination of two protocols: Transmission Control Protocol and Internet Protocol. Together, they are among the most important pieces of software underpinning the Internet which transitioned to using TCP/IP in 1983. Work on TCP began under the auspices of the Defense Advanced Research Projects Agency (DARPA) a decade earlier but, along the way, the software stack was re-architected to add IP as the early Internet grew.
Like many of the Internet's building blocks, TCP/IP was firmly entrenched before commercial interests got involved to any significant degree and, indeed, before most of the world at large had any real notion of the Internet's existence. The general public came to know the Internet through the World Wide Web, an outgrowth of Tim Berners-Lee's development of HTML at CERN, in the 1990s. Thus HTML, as well, is a key standard.
At the time that TCP/IP was gaining momentum, the International Organization for Standardization (ISO) spearheaded a large project to standardize networking. The "OSI model" remains the standard way to think about layers of the networking stack. If you talk about a switch being "Layer 4," you're using OSI terminology. But the specific protocols developed to go with the model were never widely used. (TCP/IP largely maps to the layers defined in the OSI model.)
The x86 architecture is perhaps the canonical example of a de facto standard driven primarily by a single vendor: Intel. Microsoft Windows is also in the running, but it was very arguably x86's ubiquity in a segment of the market open to relatively low-cost packaged software that made the rise of Windows possible. Over the past decade, AMD has also driven x86 innovations--most notably 64-bit extensions. However, it was Intel that had the biggest hand in shifting the industry from a structure in which each company did everything from fabricating processors to writing operating systems to developing databases to one in which different companies tend to specialize in one part of the technology ecosystem.
x86 emerged as a dominant chip architecture for a variety of reasons. IBM designed Intel's 8088 into the first important business PC. It got this win and others at a time when the world was rapidly computerizing. And Intel optimized itself to ride key technology trends while divesting itself of businesses, such as memory, as they commoditized.
Finally, here are a few others that could make a list like this one:
Wi-Fi played a big role in making personal computers more mobile--which is why Intel pushed it so hard.
VGA is the computing video standard that finally helped merge a rather splintered landscape and had a good long reign. (The latest video interconnect trend is a shift to HDMI--representing a coming together of computing and consumer electronics standards.)
SCSI was the first storage interconnect to consolidate, in a big way, a disparate set of existing connection schemes, both proprietary and more or less standardized. However, storage has remained an area where different standards are used for different purposes. That's changing to a degree with SATA, which we now see in both PCs and data centers.
I've been an IT industry analyst for almost 10 years. I've seen many technologies come, go, or fail to even arrive in the first place. However, during that time, a few techs have emerged that play a big part in fundamentally defining how businesses do computing. Most first emerged prior to 2000, but it has been during the past decade that they've truly changed things.
1. x86 processors were already well entrenched in corporate computing by the end of the 1990s, especially in their role as the "(In)tel" part of "Wintel" servers running Windows NT. However, their dominant designer and manufacturer, Intel, was heading in a different direction to handle the inevitable transition to the 64-bit processors and operating systems needed to keep pace with growing memory requirements.
That new direction was Itanium, a clean sheet processor design by Intel and Hewlett-Packard intended to get away from all the legacy features of x86 and--not incidentally--cut the x86-compatible processor makers out of the picture. The Itanium family remains with us but primarily as a processor for high-end HP servers. It was AMD that first added 64-bit extensions to x86, but Intel felt compelled to follow. And it was this backward-compatible version of x86 that became the mainstream 64-bit server processor, not Itanium.
2. The other big processor story of the decade is multicore. Near the end of 2000, Intel introduced the Pentium 4 processor based on the NetBurst microarchitecture. It was intended to eventually hit about 10GHz. In fact, it never got beyond 4GHz and came to be viewed as the last gasp of performance scaling through frequency.
AMD introduced its first multicore x86 Opteron processors for servers in 2005 which helped it gain market share for a time while Intel made major changes to its development plans and processes. IBM and Sun also aggressively pursued multi-core in their RISC lines. Specialty processors such as Azul's Vega and Tilera's TILE lines went even more radically multicore. In short, frequency is largely dead as a path to higher system performance, which will require a combination of more cores and specialty accelerators working in parallel.
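The limits of the multicore path can be made concrete with Amdahl's law, a standard back-of-envelope model (my illustration, not from the original post): the serial fraction of a workload caps what extra cores can buy, which is why accelerators and parallel software matter alongside core counts.

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Upper bound on speedup when only part of a workload parallelizes.

    Amdahl's law: total_time = serial + parallel/n_cores, so the serial
    fraction dominates as core counts grow.
    """
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# Even a 90%-parallel workload tops out well short of linear scaling:
print(round(amdahl_speedup(0.90, 8), 1))    # 8 cores
print(round(amdahl_speedup(0.90, 64), 1))   # 64 cores
```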
3. When I first met Diane Greene, co-founder and then-CEO of VMware in the fall of 2000, VMware was already selling a product to developers that let them run multiple operating systems on a single workstation. But Diane was in town to pitch me on something new, a pair of new server virtualization products--GSX and ESX Server--that made it possible to consolidate multiple workloads on a single physical server and to provision them more quickly.
The basic concept goes all the way back to IBM's involvement with early developments in time-shared computing in Cambridge, Mass., during the early '60s. And all the RISC/Unix vendors of the time had their own approaches to slicing and dicing servers. However, it was VMware that brought server virtualization to the masses. Its product ran on standard x86 servers and it provided a way to consolidate workloads right at a time when IT purchases were dramatically slowing and anything that could save money was in vogue.
EMC bought VMware in 2003 for $635 million, a figure which it's hard to believe today was widely viewed as an overpayment. Today, server virtualization--an area where VMware remains the 800-lb. gorilla despite Microsoft's entry--continues to fundamentally change the way IT departments think about operating their data centers. Virtualization also underpins much of cloud computing, another major developing trend.
4. Linux and other open source were a big part of the dot-com and service provider build-out of the late 1990s.
But enterprises? Not so much. This 2001 research note had to argue that Linux was, in fact, ready for serious production use. And, whether or not "ready for the enterprise" is a meaningful question in the abstract, the fact remains that the Linux 2.4 kernel was widely regarded as the first version deserving of that description and it wasn't released until mid-2000. IBM began its big Linux push at about the same time.
Thus, I'd argue that it's been this past decade and not the prior one that has seen Linux and open source truly become a pervasive part of computing. That's not to say that open-source has replaced all other software. But it has heavily influenced how companies do development, engage with user and developer communities, and provide access to their products--even when the software in question is proprietary.
5. My last entry has the greatest overlap with the consumer space. That's not a coincidence, given that mobile devices are a very visible example of what Citrix CEO Mark Templeton calls the "consumerization of IT."
Mobile devices encompass at least a couple of different things. The most obvious entrant is probably the smartphone--first in the guise of the BlackBerry and more recently the iPhone. We are now at the point where you can carry a bona-fide computer in your pocket, complete with GPS and other sensors, and can run applications that you install. As my colleague Jonathan Eunice has noted, it really is a transformational experience relative to, say, my older Treo. It also represents the reality of the modern smartphone that, for many, it's increasingly about mail, texting, and social media and not, you know, phoning.
However, the smartphone doesn't deserve all the limelight. The noughts have also seen the laptop computer transform. I'm not talking about the form factor so much--although Netbooks have gotten their share of attention. Rather I'm talking about the way that we can use them.
I've had laptops since the 1990s but it wasn't until about 2001 that conferences and other venues started to put up Wi-Fi networks. They worked fitfully (some things haven't changed as much as we might like), but this was the beginning of the connected laptop rather than the merely mobile laptop.
And that's why I see the smartphone and the laptop as part of the same mega-trend. It's not about a particular form factor or usage model. It's about (almost) always being connected to applications that increasingly live largely in the network.
Systems are getting more general-purpose. At least in terms of units sold, servers with two x86 processors dominate the landscape.
And it's more than just servers. For example, on Tuesday Vyatta announced a new series of network appliances, the Vyatta 3500. These systems, like the other appliances that Vyatta sells, combine standard off-the-shelf x86 server hardware with an integrated software subscription that provides networking functions such as Firewall, VPN, IP address management, administration, diagnostics, and so forth. Vyatta pitches its appliance as a much lower-priced alternative to dedicated networking hardware from the likes of Cisco.
We've seen similar examples in the storage arena. Sun has perhaps been the loudest proponent of open storage; its "Thumper" is essentially a standard server with a mechanical design that's been optimized to maximize storage density. However, even beyond such a clear-cut example, storage at companies like HP and IBM has increasingly aligned with the technology and components used in their servers.
One also sees servers, storage, and networking coming together in the form of blades. This is a bit ironic because blades, as initially envisioned, were intended to explicitly disaggregate computing from networks and stored data. But outside of high-performance computing, blades have instead come to be an integration point.
That said, generalization isn't the whole story.
I'm also seeing a lot of interest in what are sometimes called "workload optimized systems" today. The basic idea is straightforward. Different types of workloads perform better on different types of systems. For example, a system that needs to handle high-volume financial transactions won't necessarily look the same as a system that is running financial models instead.
And we're increasingly seeing very high-scale applications that include different workloads of different types. If you want to get technical, you can think of them as composite applications or an interrelated catalog of services associated with a data repository. IBM favors the term "smart applications," which isn't such a mouthful. Whatever you call them, the idea is that one application has different parts as disparate as transaction processing, business analytics, and Web serving. While all of these can be handled by a single type of server, as the scale increases it can make sense to optimize individually for the different workloads.
Thus, we're seeing and will continue to see a blurring of the lines between servers, storage, and networking. The strict separation of these functions is a relatively recent development in the history of information technology and isn't an inherent requirement. At the same time, the idea that a single generic server design could be the right tool for every job would have once seemed an odd assertion. And it's one that I'm seeing increasingly challenged again.
As cloud computing in its various forms increasingly happens rather than just being talked about, I'm starting to hear the idea of a cloud-computing exchange floated. There are certainly things to like about the concept but I don't see it playing out in pure form anytime soon for reasons that I'll get into.
Let's start by defining what I'm talking about when I say "exchange" here. The idea is that different hosted infrastructure providers would put their unused capacity onto a spot market and buyers would bid for it. Different pricing and auction mechanisms are possible but that's not important for this discussion. The key points are: multiple suppliers, interchangeable product, and some sort of market for the capacity.
Spot markets are well-established in many other areas. Commodity exchanges are probably the best known. However, dynamic pricing based on inventory and demand is also widespread across the travel industry for example. Hotel rooms and airplane seats have a lot in common with compute cycles: they expire if they're not used and the incremental cost of filling them is low.
The idea of having a market for compute cycles isn't new. It came up during the height of the P2P computing craze at the beginning of this decade. P2P computing never wholly went away; SETI@home remains an active project. Univa UD (formed by the merger of Univa and United Devices) has had some success in pharma and finance (although it's less client-centric than United Devices' original vision). But P2P, at least in the sense of harvesting excess client compute cycles, never amounted to something truly important, much less a revolution.
We're also starting to see spot markets in cloud computing today. In December, Amazon Web Services announced Spot Instances in which customers bid on unused Amazon EC2 capacity and run those instances for as long as their maximum bid exceeds the current spot price, which changes periodically based on supply and demand. However, this is different in a fundamental way from a market exchange. Spot Instances are a pricing approach by a single company. Amazon could have any number of different pricing approaches, each of which would doubtless appeal to some types of customers more than others. However, whatever the price and the price model, you're still buying compute cycles and other services from one company, with one infrastructure, and one set of programming interfaces.
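The bidding rule behind Spot Instances can be sketched in a few lines. This is a toy model of the mechanism described above--the workload names are invented and the real service involves far more machinery--but it captures the core idea: an instance keeps running only while its owner's maximum bid meets the fluctuating spot price.

```python
def running_instances(bids, spot_price):
    """Return the bidders whose instances survive at the current spot price.

    bids maps a workload name to its owner's maximum bid ($/hour).
    An instance runs for as long as its max bid >= the spot price.
    """
    return sorted(name for name, max_bid in bids.items()
                  if max_bid >= spot_price)

bids = {
    "batch-job": 0.04,         # hypothetical workloads and bids
    "overnight-render": 0.10,
    "best-effort-crawl": 0.02,
}
# When demand pushes the spot price to $0.05/hour, lower bids are evicted:
print(running_instances(bids, 0.05))
```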
So what are the issues with a true market exchange?
Interoperability. Compute cycles (on x86 hardware) may be close to a commodity but the way we access them is not. There has been a certain amount of (mostly de facto) standardization of basic cloud infrastructure services; this often boils down to mimicking the way Amazon does things. However, working against standardization efforts is the push upward by infrastructure vendors to become platform suppliers. (Think proprietary database engines and automated scaling.) And these higher-level abstractions are far less standardized. There is a general push toward more interoperability but the sort of transparent movement that exchange markets require looks to be a long way off.
Security and compliance. These continue to be among the bigger concerns I hear CIOs raise about computing on public clouds in general. It's a complex topic that I won't delve into here. But suffice it to say that whatever risk issues are associated with using one or two vetted public cloud vendors multiply if you're talking about purchasing from many vendors through an exchange. Of course, an exchange doesn't necessarily require that you buy from just anyone but once you're starting to talk about having an extensive certification process for each supplier, you're getting pretty far away from the concept of a spot market commodity exchange.
Andi Mann of EMA put it this way: "If we think of this as code sharing for IT, where airlines may hand off a portion of your flight to another carrier, who knows whose cloud plan you will end up on. If you have an agreement with a supplier who is SAS 70-compliant, would the other carrier also be SAS 70-compliant?"
Computing isn't electricity. The whole electric-grid analogy makes for a nice storyline but falls down in important ways. Cloud computing isn't really a commodity the way power from the grid is, and its economies of scale would seem to top out at a size that individual large providers can reach on their own. What's more, cloud computing isn't just cloud processing, though this fact often seems to get forgotten. There's usually persistent data as well--and that requires persistent storage. To be sure, some types of computing consume a lot of cycles relative to the number of storage bytes. But for many purposes, moving where computing happens means lots of bandwidth (and time) to move data as well.
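The data-gravity point lends itself to simple arithmetic. The sketch below (my illustration; the figures are idealized and ignore protocol overhead) shows why relocating a workload with a large persistent dataset is nothing like switching power suppliers:

```python
def transfer_hours(terabytes, gigabits_per_sec):
    """Hours needed to move a dataset over a sustained network link.

    Idealized: assumes the link runs flat-out with no protocol overhead,
    so real transfers take longer.
    """
    bits = terabytes * 8e12                    # 1 TB = 8e12 bits (decimal)
    seconds = bits / (gigabits_per_sec * 1e9)  # 1 Gbps = 1e9 bits/sec
    return seconds / 3600

# Moving 10 TB of persistent data over a sustained 1 Gbps link:
print(round(transfer_hours(10, 1), 1))  # roughly a day, before overhead
```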
Lest I come across as unduly negative, I do believe that it will become easier to manage computing workloads more dynamically and thereby move them from place to place and add capacity to them on the fly. Perhaps the end game is a market exchange--or at least an architecture that supports a market exchange for those wanting one. But that's a huge leap from where we are today and makes some already difficult problems perhaps an order of magnitude more difficult.
Earlier this week, Intel finally launched "Tukwila," the latest iteration in its Itanium family of high-end microprocessors.
Arriving alongside the first of an associated line of servers, Tukwila didn't garner as much attention as it might have otherwise. It's also true that today's Itanium is something of a specialty product. But that doesn't make it irrelevant.
Tukwila is the first Itanium to incorporate Intel's serial processor communications link (QuickPath Interconnect, or QPI) and integrated memory controllers. These features boost performance considerably and are standard fare for the current generation of server microprocessors. They also mean that the Itanium 9300, as Tukwila is officially known, and the Xeon (x86) processor can, in principle, be supported by the same system design.
In practice, this convergence was a more interesting selling point in the days when Intel envisioned a broader market for Itanium processors. Nonetheless, it will still let Intel and its manufacturing partners take advantage of Xeon design work and dollars for Itanium. The specifics of the chip aside, though, it's not unreasonable to ask whether any of this matters. Given that both AMD and Intel's high-end x86 processors get more capable by the year, why does anyone need Itanium?
Certainly, Itanium's market position today is not the one envisioned by Intel and Hewlett-Packard, when they first started designing the processor in the mid-1990s. They had conceived of it as a 64-bit processor family running Windows and (perhaps) a united Unix that would emerge as the de facto standard, when the time came to move beyond the increasingly restrictive memory limits imposed by 32-bit processors.
The reasons why this didn't happen are numerous, and it would take an extended discussion to give them their due. However, some of the big ones include an overly ambitious concept; delays coupled with bad timing; a focus on instruction-level parallelism, when the world would soon move to more of an emphasis on threads; and AMD's introduction of 64-bit extensions for x86.
Today, by contrast, just one company, HP, accounts for about 85 percent of the market for Itanium processors, with the balance mostly going to several large Japanese computer system vendors. HP uses Itanium in its Integrity line, for which it mostly runs HP-UX (HP's Unix) and NonStop (the descendent of Tandem's fault-tolerant operating system) applications.
One company may not sound like much of a market but, in fact, vendor-specific processors were long the norm in the computer industry and only went by the wayside when x86 matured. And even today, IBM continues to aggressively roll out new Power processors, and Oracle says it will keep investing in Sparc. Each of these cases is a bit different, but the basic point is that it's not outlandish to imagine that a major vendor's product line could support a unique microprocessor.
But why would HP want to, given that this is a company that also has a major x86 product line? In a word: software.
It is likely, perhaps even certain, that if HP could wave a magic wand and have HP-UX and all its myriad applications run on Xeon tomorrow, it would do so. However, there is no such magic wand.
The closest thing to such a wand, dynamic binary translation (DBT), works in some limited contexts. IBM uses it for certain Linux applications on Power chips, and Apple used DBT to aid migrating from PowerPC to Intel chips. But, for the most part, IT shops won't or can't use it for the sort of critical applications that run on HP-UX today. Indeed, when HP was first moving applications to Itanium, it developed its own DBT technology under the name "Aries." Few used it.
It took many painful years--the better part of the last decade--for HP and its software partners to re-establish HP-UX's software catalog on Itanium when it migrated off PA-RISC. To start this process anew for Xeon is simply unthinkable.
And even if the features of Xeon have largely achieved parity with Itanium, the same isn't generally true of the platforms as a whole. HP-UX is a mature commercial Unix operating system in the mold of AIX and Solaris. Linux and Windows gain in capability and robustness with each passing year, but they're not yet at the same point. The contrast with NonStop is even more striking. This is, after all, a line of systems that powers about 75 of the 100 largest fund transfer networks around the world.
In short, Integrity brings a lot of money into HP, and it provides customers with capabilities that they can't necessarily get on Xeon-based platforms. And, in any case, HP-UX customers can't necessarily just pick up and move. Migrations take effort and money, and they have a degree of risk, even if the end state is ultimately a more desirable place to be.
In addition to introducing Tukwila, Intel provided additional detail about its successor, "Poulson." Scheduled for about two years hence, Poulson will skip a process generation and launch using 32-nanometer technology. This should bring it more in line with the then-contemporary Xeon processors than the 65nm Tukwila is, relative to today's 45nm Nehalem. (The process generation is significant because it's closely related to the amount of real estate on the chip and therefore to features such as the number of cores and the amount of cache.)
Without providing much in the way of details, Intel also indicated that Poulson will have other architectural enhancements that go beyond simply being a process shrink. "Kittson" will be the generation after that.
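The significance of skipping a process node can be made concrete with idealized scaling arithmetic. This is a rough rule of thumb (my illustration, not Intel's actual numbers): transistor density scales roughly with the inverse square of the feature size, which is what translates a shrink into more cores and cache.

```python
def density_gain(old_nm, new_nm):
    """Idealized transistor-density gain from a process shrink.

    Rule of thumb only: density scales with the inverse square of the
    feature size. Real gains depend on the specifics of each process.
    """
    return (old_nm / new_nm) ** 2

print(round(density_gain(65, 45), 1))  # one node (65nm -> 45nm): roughly 2x
print(round(density_gain(65, 32), 1))  # skipping to 32nm: roughly 4x
```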
Plans can change, of course, and processors can slip. However, barring seismic changes, Intel sketched out a road map for something like a decade's worth of Itanium processors. I don't really expect these Itaniums to set a lot of performance records, but there's no reason to think that they won't be "in the ballpark." It's worth remembering that Sun sold lots of Sparc systems long after they had a "hot" microprocessor. The inertia in applications, skills, and general risk aversion in high-end servers is enormous.
Itanium doesn't matter when it comes to volume computing. It fought that battle and lost. But it remains an integral component in a major product line at a major systems vendor. And it remains a component that, in a world without magic wands, can't be easily replaced.
It's not exactly news that business applications aren't modernized on a whim. That's because organizations tend to operate on a rule that can be paraphrased something like, "If it's more or less working, leave it be. There's plenty of work that actually needs to be done."
But every now and then I run across an example that emphasizes just how long software can hang around. We're not talking a revision or two of a packaged application but genuinely obsolete technology.
In the course of doing some research, I ran across a 2009 press release titled "Transoft successfully completes migration of legacy auction system for Christie's." The system in question was Christie's main property auction system which one imagines is pretty important to a multinational company that runs art auctions as its business.
The release goes on to describe some technical details of the system: "The Christie's system, running on Data General Eclipse MV hardware and the AOS/VS II operating system, was written in DG COBOL and DG CLI using the DG INFOS II hierarchical database management system. The application supported a character-based user interface via terminal emulation." This caught my eye because I was once a product manager for a variety of Eclipse MVs and also of INFOS II for a time.
Let me translate. This core business application was based on a proprietary operating system and database running on proprietary minicomputers that haven't been manufactured or updated since the mid-1990s and had really been a legacy product line for a few years before that. And, oh, Data General no longer exists. We hear about legacy mainframe applications all the time but at least IBM still develops the System z and supports many of the historical products that still run on it.
The release doesn't go into the cost of the migration but does give some sense of the relative complexity of the project. "Christie's undertook extensive system and usability testing of the newly migrated application, after which Transoft assisted Christie's in the roll-out to London, New York, Hong Kong and to offices in a further nine cities across the world." And that's why software is slow to change.
So Christie's in-house application is all spiffy and modern now, right? Well, it did move to a Microsoft Windows Server 2003 environment. But the code itself? Converted to Micro Focus COBOL. And the data migrated to Transoft's U/FOS data management system, an INFOS II clone. Even modernized software often keeps a surprising amount of the old.
One of the dynamics of the server virtualization marketplace that doesn't get the attention it probably should is the question of where the hypervisor "lives" and gets delivered to buyers. Services, such as load balancing and replication, that leverage a virtualized foundation to construct what goes by names like Dynamic IT may be ultimately more important than the foundation's components. However, the choice of hypervisor matters today if only because it serves as a sort of control point for the profitable components above.
Hypervisors get delivered in three different ways.
The first is in the form of software purchased from an independent software vendor. This is the primary VMware model. And, with VMware still the 800-pound gorilla of virtualization, it remains the dominant delivery model today. These ISVs will argue, as does Simon Crosby of Citrix, that if "the user expects to deploy a virtualization platform that is entirely guest OS agnostic, using a complete virtual infrastructure platform then a type-1 hypervisor that is OS agnostic (xen.org, Xen Cloud Platform, Citrix XenServer, OracleVM, VMware vSphere) is what they will go for." Former VMware CEO Diane Greene also argued this point with me--vigorously--after I suggested that the operating system might be a more logical virtualization entry point for some users.
The other primary way to acquire virtualization is as part of an operating system. This is the Microsoft model (and therefore the reason that Diane Greene got so irked at me for suggesting it might be a viable virtualization on-ramp). This model also describes Red Hat's approach with KVM.
From the operating system vendor's perspective, you could sum up this approach as "the path of least resistance." You're already buying the operating system anyway, so why not just get core virtualization as part of the package? (Of course, you then have to buy other pieces from the operating system vendor to effectively manage and make use of that virtualized infrastructure.) It strikes me as a powerful acquisition model for homogeneous environments or even environments managed as homogeneous pools. While OS-based virtualization has some catching up to do in areas such as management, I'm not sure limitations and newness, such as those noted by Andi Mann, are obstacles to the same degree they'd be if we were talking about a standalone product. As Crosby also notes:
It's important to realize that for a Linux vendor, KVM significantly simplifies the engineering, testing and packaging of the distro. KVM is a driver in the kernel, whereas Xen, even with paravirt_ops support in the Linux kernel, requires the vendor to pick a particular release of Xen and its tool stack, and then integrate that with a specific kernel.org kernel, and exhaustively test them together - rather than just getting a pre-integrated kernel and hypervisor from kernel.org. So it is entirely reasonable to expect that over time the distros will focus on KVM as a hypervisor. I think KVM is extremely powerful in this context.
The third delivery path is embedded, which can be based on either a standalone hypervisor or one based on a standard operating system kernel. We first saw this idea making the rounds in 2007 and, today, most of the major hypervisors are available in embedded form on various models of servers from the large x86 system makers. It seemed like an appealing idea at the time--amounting to virtualization as a server feature in the vein of a sort of super-BIOS. This was particularly true given all the ongoing work to standardize the way that hypervisors were monitored and managed.
To date, however, embedded hypervisors haven't really taken off. The standalone hypervisors exist in the context of a much broader suite of virtualization software from the ISV and customers find it more natural to acquire all their virtualization software from that source, rather than a system maker. For their part, the operating system vendors are already delivering an OS, so virtualization is just a natural extension of that. Perhaps as virtualization becomes more ubiquitous, embedding it in servers will seem more natural, but it hasn't played out that way to date.
What all this does show, though, is that for all the talk of the "commoditization" of the hypervisor, we're not at that point today. Commoditization implies, among other things, that a product from one source can be transparently interchanged with that from another. And that doesn't describe hypervisors--not even close.
I continue to think that the all-in-one converged home device is a relic of a time when electronics cost relatively more than they do today. That's not to say convergence never happens. Today, smartphones often function as MP3 players as well and game consoles increasingly are a portal to many types of digital entertainment, not just games. That said, the general trend seems to be toward more consoles, displays, PCs, pads, and mobile devices rather than fewer.
There's an important point to consider though. Reader cvaldes1831 sums it up nicely:
There is one very practical reason to limit the number of computers in the house: system administration time. Even for something that requires very little sysadmin effort (like my three-year-old MacBook), the minutes and hours add up.
I could afford several computers, but I simply don't want to administer more than one system (I'm single).
This is my experience as well. I have a laptop that I use as essentially my downstairs browser when watching TV or wanting to pull up a recipe while cooking. But if I haven't used it for a week, it invariably wants to install updates for Windows, Firefox, Java, or some other system component or program. It also doesn't necessarily start up right away. I put up with the nuisance in this case, but the "care and feeding" of a computer running a general-purpose operating system definitely limits how many such devices I have around the house.
Thus, "more devices" needs an asterisk. And that asterisk notes that updates must generally be automatic and unobtrusive, that new components or applications can't require complex installation, and that the device in question just works doing whatever it's supposed to do. Relative simplicity vs. complexity and single-function vs. multi-function will vary by the task at hand, but the common thread is that all will need to be much simpler than the PC of today.
These home devices may be computers, even vastly powerful ones by historical standards, but they can't appear as such.
5/4/10
An article by Elisabeth Bumiller in The New York Times about "death by PowerPoint" in the U.S. military has been making the rounds. The following excerpt is representative:
Commanders say that behind all the PowerPoint jokes are serious concerns that the program stifles discussion, critical thinking and thoughtful decision-making. Not least, it ties up junior officers--referred to as PowerPoint Rangers--in the daily preparation of slides, be it for a Joint Staff meeting in Washington or for a platoon leader's pre-mission combat briefing in a remote pocket of Afghanistan.
As an industry analyst, I helped polish many, many slide decks. And I've created more than a few of my own. Some of the criticisms are certainly valid while others seem to me more about the nature of routine status meetings than the particular tool used to create material for those meetings.
It's not that PowerPoint and its competitors don't share any blame. Over time, they've gained features like gradient fills and shadows that encourage fiddling and the gratuitous use of graphical junk. Standard templates tend to the cluttered and garish. But the hierarchical bullets that are the target of many PowerPoint criticisms such as the following predate PowerPoint, indeed, predate personal computers:
Commanders say that the slides impart less information than a five-page paper can hold, and that they relieve the briefer of the need to polish writing to convey an analytic, persuasive point. Imagine lawyers presenting arguments before the Supreme Court in slides instead of legal briefs.
Captain Burke's essay in the Small Wars Journal also cited a widely read attack on PowerPoint in Armed Forces Journal last summer by Thomas X. Hammes, a retired Marine colonel, whose title, "Dumb-Dumb Bullets," underscored criticism of fuzzy bullet points; "accelerate the introduction of new weapons," for instance, does not actually say who should do so.
Ultimately, one of the reasons people like to use bullets is that it's a relatively easy way to structure a straightforward presentation, which may be fine for a routine meeting but probably isn't so good if you're trying to formulate a strategy or understand a problem.
Perhaps the biggest problem with most business presentations is that they're trying to do two things at the same time. They're "sliduments," to use a word coined by Garr Reynolds in his book, "Presentation Zen."
The problem is this. When your average business presentation is presented to someone, the primary leave-behind is usually the same slide deck, perhaps with some notes taken on it during the course of the presentation. Furthermore, while in an ideal world, the presenter would be capable of giving his pitch with no slide support whatsoever, the reality is that someone who only sort of knows the material often fills in at the last moment.
Both these factors push slides towards a worst-of-all-worlds state. They're still bullet points rather than more carefully crafted long-form text, but they have lots of bullet points because the slides need to be at least somewhat comprehensible in the absence of the actual presentation. Add to this the fact that bullets (and random stock images) are much easier to create than compelling and relevant graphics and you end up with slides that are doubtless all too familiar to just about everyone.
(As an aside, I find the graphic at the beginning of The New York Times article odd, as it doesn't have much to do with, and may even be in opposition to, the main point of the article. Complex and information-rich graphics have a role in particular circumstances. In any case, they tend to trump an endless march of bullet points.)
As many commented on the "" theme, tools don't create bad presentations, people do--even if the tools must share some of the blame for encouraging certain paths.
But, as a reader points out, we shouldn't just blame software in this regard:
It's just too hard and time consuming to freely draw diagrams and text, as on a whiteboard. Maybe this issue will finally improve with touch computing, but until then, we do all waste a lot of time...
This is an important and insightful point. We mostly use keyboards and mice to enter information into our computers. The keyboard is still similar to that of the first commercial typewriter sold by Remington, beginning in 1873. Even the mouse is nearly 50 years old.
With a keyboard and mouse, some things are natural and straightforward. Keyboards are obviously about entering sequential letters and numbers. Mice are optimized for tasks like selecting things, inserting standardized objects, and moving them around in a two-dimensional space.
Others aren't so natural--no matter how cleverly designed the software. While experienced designers can certainly construct complex images using a keyboard and mouse, quick napkin sketches, decision trees, informal charts, and so forth are far harder to create. The keyboard and mouse, as abetted by the software designed to be used with them, make it relatively easy to produce certain forms of polished professional content while discouraging forms that we routinely use to communicate absent such mechanical constraints.
Any number of other input devices have appeared over the years of course. Joysticks and game controllers are commonplace, but they're more about navigating through three-dimensional space (and shooting anything that moves) than they are about creating. Tablets have probably come closest to providing a way to casually sketch. But tablets are surprisingly unnatural for most people because the surface you're drawing on isn't the surface where the output is displayed.
Screens on which you can draw can map far more directly to the physical world. Specialized examples of such screens aimed at graphics professionals are expensive today. But as multitouch displays become commonplace and even routine, it seems likely to me that they'll gain the critical mass to encourage presentation software that's optimized for them.
Polished graphics will still take time and work, of course. And perfecting the look of a major event keynote will require the work of professional designers. But we can hope that for routine day-to-day needs--whether presentations or long distance collaboration--the quick-and-dirty sketch will replace some professional-looking but ineffective bullet points.
Over the years, Apple has taken aim at business computing a number of times. Its last such foray was in 2002 when it rolled out its Xserve rackmount server.
That move was partly precipitated by Apple's introduction of the BSD Unix-based OS X operating system, which adhered to far more standards, interoperated with other systems far better, and was simply less idiosyncratic than previous Apple operating systems. The move could also be seen as Apple trying to do something, anything, that would let it break out of its declining niche on the desktop.
To really break through in the server arena and go beyond customers who already favor Apple would take a full-blown corporate commitment to expanding product horizons beyond the desktop, beyond cool consumer technology, and into the mundane-but-critical environment of the data center. So far, Apple has released a sweet product but hasn't demonstrated any substantial shift in server thinking and commitment.
To the degree that Apple ever seriously viewed the Xserve and other data center components as an important part of its future, that potential strategic thrust was largely mooted by another product line introduced by Apple the year before. On its second generation in 2002, the consumer products in that line were still widely viewed as overpriced and only of real interest to the Mac faithful. But that would change. I'm talking of course about the iPod.
So Apple effectively became a consumer electronics company. Even when it made its move into phones, features that were mostly of interest to businesses came slowly. For example, Exchange ActiveSync didn't come to the iPhone until version 2 of its operating system. Various security features required to connect to many corporate networks were similarly belated.
But, even though Apple remained mostly on the consumer side of the fence, that fence started to fall down. Citrix CEO Mark Templeton, among others, calls it the consumerization of IT. Whatever the name, it means that we're seeing something of a shift away from rigidly prescribed, IT-supplied client devices and towards an environment where many employees can choose what to use within a fairly broad set of parameters.
There are many reasons for this. The ubiquity of cell phones and even the widespread use of personal smartphones with data plans starts to make separate dedicated business devices seem a bit anachronistic for many situations. More and more corporate applications have Web front-ends that can be accessed from any securely-connected browser. The workforce is more mobile; employees don't just do business from a desktop system in the office. In short, for many people, there's a blurring of the personal and the professional that makes a clean separation of personal devices and professional ones difficult at best.
And this has played to Apple's advantage. That it makes consumer products is no longer a critical impediment to business sales when those buying client products are consumers.
With the last roll of Kodachrome slide film ever to be manufactured by Kodak now developed, a major chapter of the film photography era is winding down. Dwayne's Photo Service of Parsons, Kansas, is the only lab left in the world that still processes this type of film, and it plans to stop processing Kodachrome on December 10. Kodak itself had previously farmed out what remained of its in-house film processing business to Dwayne's in 2006.
First developed for use as a movie film, the Kodachrome process -- in which three emulsions, each sensitive to a primary color, are coated on a single film base -- was invented by Kodak's Leopold Godowsky Jr. and Leopold Mannes in 1935. Kodachrome wasn't the first color film, but it was the first successful commercial film based on a subtractive process; earlier additive processes used filters, which limited quality.
Modern color slide films all use a subtractive process. In this regard, Kodachrome essentially served as a template for all subsequent mass-market color slide films, including Kodak's own Ektachromes.
However, at a more detailed level, Kodachrome isn't much like other slide films. The differences are a big reason why Kodachrome remained popular for so long, in that they improved image longevity and helped photographers produce uniquely colorful images. But they're also the reason that maintaining this line of film became untenable for Kodak as its volumes shrank.
In most slide films, the couplers (color formers) for each subtractive color are added to the appropriate emulsion layers of the film when it is manufactured. Agfa pioneered this approach in 1936.
All dye images (typically three) are then formed simultaneously during the color developer stage of processing. E-6 is the name of the commercial process. While somewhat exacting with respect to time and temperature, running film through E-6 is still relatively straightforward. It's used by commercial labs to develop Ektachrome, Fujichrome, and essentially all other modern slide films; even amateurs can run film through an E-6 process at home.
Not so with Kodachrome.
The film is essentially a multilayer black-and-white film, meaning that the color formers have to be added in a very carefully controlled way during the development process, the current generation of which goes by the name K-14.
The first developer merely forms three superimposed negative images of the original scene, one in each of the red-, green-, and blue-sensitive emulsion layers. To introduce color, the film needs to be re-exposed to light multiple times through filters of various colors and subsequently developed in appropriate chemicals. These steps essentially colorize the unexposed and undeveloped silver halide in each of the three emulsion layers--which is to say, the positive image. The silver is then removed, leaving the three positive dye images.
It's a very exacting and complicated process relative to E-6 and has always been handled by a relatively small number of labs. (Prior to a 1954 consent decree, Kodak wouldn't even sell the chemicals needed to do the processing.)
Film sales have dropped significantly. Film isn't going away anytime soon. But lower volumes tend to lead to fewer choices. That Kodachrome is one of those choices being weeded out is certainly nostalgia-provoking. It was a favorite of many professional photographers. I myself liked it and shot many rolls, albeit fewer in recent years after Kodak started to come out with new generations of Ektachrome that I favored for many purposes.
And today, well, it's been well over a year since I've shot a roll of film. But that doesn't mean that I can't fondly remember Kodachrome.
Jason Scott's first documentary in 2005 was about bulletin board systems (BBSs), which were in a sense the PC world's parallel evolution of the early Internet. This documentary, really more a multi-disc series of interviews with BBS pioneers than a documentary film as such, brought back to me my early years in personal computing and my subsequent forays into shareware software development through the mid-1990s.
Now, Scott has tackled a subject from roughly the same era: the text adventure game. My involvement here was more peripheral but no less a part of my memories.
As his new "Get Lamp" documentary recounts, the text adventure genre began with Will Crowther's Colossal Cave Adventure game in the early 1970s, more commonly referred to as just Adventure. Crowther was a caver and was also involved with the initial development of the ARPAnet, the Internet's precursor, at Bolt Beranek and Newman (BBN) in Cambridge, Mass. Crowther himself isn't interviewed for the documentary, which describes him as the J.D. Salinger of computer games. He prefers his work to speak for itself. However, Don Woods, who later enhanced the game, does put in an appearance.
Fast-forward a few years to the late 1970s. Various MIT staff and students associated with the AI Lab were starting up a company called Infocom which would go on to become the most significant commercial text adventure game company.
Artificial Intelligence was a big MIT computer science focus of the time; the department that inhabits MIT's Stata Center still goes colloquially by CSAIL--Computer Science and Artificial Intelligence Laboratory. And some of the technology that went into Infocom's games starting with Zork had more than a passing relationship to AI research; perhaps the biggest technical challenge with these games was parsing and "understanding" freeform text entered by the human player and responding logically.
At the time, I was the publicity director for the Lecture Series Committee (LSC) at MIT, and several of the Infocom founders such as Marc Blank were regulars around the LSC office. A fair number of others involved with the group would join Infocom over time. (I personally did some casual volunteer game testing and paid a number of visits to Infocom's 55 Wheeler St. offices in Cambridge.)
Scott's documentary does a great job of capturing a gaming era which is ultimately hard to separate from the history of Infocom. Indeed, in addition to various extras, Scott has actually assembled two full videos. One is a broad history of text adventure games, starting with Adventure. The other focuses exclusively on the Infocom thread, both its beginnings and its fall after a relatively few years.
Of course, text adventures were giving way to graphics games in any case. "Shooters" like Castle Wolfenstein became the game of choice. Natural language parsing, advanced as it was, was still hamstrung by limitations. Hard AI, after all, never really worked out.
Graphics-centric games became the norm. And the specs of the latest graphic card became intertwined with the latest generation of game.
But that's changing. The first-person-shooter (FPS) on the Xbox isn't going away. But casual games, which don't have such an intensely state-of-the-art graphic focus, seem to be staging a comeback.
Take social gaming for example.
Steve Meretzky wrote some of Infocom's major titles (my favorite being A Mind Forever Voyaging) and would, ahem, work a very inside-baseball joke about me into one of them (Planetfall). He's now VP of Game Design at Playdom, and the creator of games like Social City and Mobsters.
As Meretzky puts it: "Text games, like the early games in any genre or on any platform, prove that well-designed interactivity and meaningful player agency are the core elements of fun, and that all the polish and pizzazz that comes later is just sizzle on the steak. The early days of social gaming, with hits like Mobsters and Mafia Wars that were almost all text are just the latest examples of this."
The axis of game development has arguably shifted. At a minimum, it's also happening along new axes in response to mobility and interconnectedness. And it's starting to look a bit like a return to the past.
Certain ideas lurk largely at the boundaries of the IT industry, periodically making a push for a more central role. One such is the appliance or integrated stack--an assembly of hardware and multiple layers of software from a single vendor.
The argument for this concept revolves around simplifying the acquisition of technology and optimizing its operation.
Of course, vertical stacks were once simply the-way-systems-were-built. This model largely gave way to horizontal layers such as microprocessors, operating systems, and databases developed by different specialist vendors and brought together at the end user. (Former Intel CEO Andy Grove describes this shift in his book "Only the Paranoid Survive.")
However, the "Web 1.0" era, circa 2000, brought vertical integration to the distributed systems world in the guise of so-called appliances, many intended to plug into the network and perform some newfangled Web-era function such as Web serving or video streaming. Cobalt Networks was perhaps the best known and most sophisticated, but there were many of them, most of which wouldn't exist within a few years. For their part, many of the large system vendors also established appliance divisions. Those would soon be shuttered as well.
Appliances promised simplification and optimization but, in practice, they were widely viewed as too narrow and inflexible. Even software-only versions leveraging virtual machine technology have seen far more uptake as a way to distribute demos than as a way to deploy production applications. The fundamental issue is that, even though users are ultimately interacting with the application, it isn't really possible to fully abstract away and ignore many of the underlying pieces. The specifics of components like operating systems and servers have important implications for IT operations--however they're packaged.
As one commenter put it: "Even if, say, a vendor solution is a 'drop in' technology initially, the complexity and tradeoffs of a long-term dependency on the vendor adds greatly to the cost and complexity."
This highlights something that's been a major stumbling block for a lot of integration plays in a distributed systems world. Many technologies and products that may make sense in the context of a "green field" deployment make a whole lot less sense when they have to work alongside existing networking, storage, servers, operating systems, and so forth. Furthermore, even if integration makes something easier to initially install, that doesn't necessarily make maintaining it any easier. In fact, it can make updates and upgrades harder by introducing dependencies and requirements that are specific to a single platform.
As an IT industry analyst during the first server appliance boom, I asked one question over and over: "How is hooking together a bunch of boxes from a bunch of different companies to perform a bunch of discrete functions going to simplify things?" I never got a good answer then and, even if the two situations aren't completely comparable, I'm not sure how the current talk of integrated stacks resolves this fundamental question either.
When I wrote a research note entitled "Latency Matters!" in 2002, I was primarily reacting to the tendency of computer system vendors to highlight how much data they could move around rather than how quickly that data could get from point A to point B. This made comparing server designs--one of my main areas of focus at the time--difficult given that the speed, rather than the amount, of data movement within and between various subsystems was often the more important metric. As I wrote:
Latency is the time that elapses between a request for data and its delivery. It is the sum of the delays each component adds in processing a request. Since it applies to every byte or packet that travels through a system, latency is at least as important as bandwidth, a much-quoted spec whose importance is overrated. High bandwidth just means having a wide, smooth road instead of a bumpy country lane. Latency is the difference between driving it in an old pickup or a Formula One racer.
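The pickup-truck analogy can be made concrete with a little arithmetic. The sketch below (my own illustration, with made-up link numbers) uses the standard model that total transfer time is fixed latency plus payload size divided by bandwidth, which is why latency dominates for the small requests that make up so much real traffic:

```python
# Rough model (not from the original research note): total transfer time is
# fixed latency plus serialization time (size / bandwidth). For small
# payloads, the latency term dwarfs the bandwidth term.

def transfer_time_ms(size_bytes, latency_ms, bandwidth_mbps):
    """Time to deliver a payload: fixed latency plus time on the wire."""
    serialization_ms = (size_bytes * 8) / (bandwidth_mbps * 1000)  # Mbps -> bits/ms
    return latency_ms + serialization_ms

# A 1 KB request over a wide-but-laggy link vs. a narrow-but-snappy one.
fat_laggy = transfer_time_ms(1024, latency_ms=50, bandwidth_mbps=1000)
thin_snappy = transfer_time_ms(1024, latency_ms=1, bandwidth_mbps=10)

print(f"1 KB over 1 Gbps with 50 ms latency: {fat_laggy:.2f} ms")
print(f"1 KB over 10 Mbps with 1 ms latency: {thin_snappy:.2f} ms")
```

The hundredfold-faster link loses badly here: the Formula One car on the country lane still delivers the small package first.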
More recently though, I've seen some great examples that highlight just how important small latency differences are in applications that go well beyond single systems or small clusters. Perhaps unsurprisingly, the financial services industry is driving a lot of this low-latency activity given that trading is all about getting as close to instantaneous as possible.
(I leave the economic and policy aspects of high-speed trading to others. I'd note though that the same page of The Wall Street Journal (PDF) that covers Hibernia Atlantic's announcement also discusses the "flash crash" report.)
One example is the prominence of financial firms like Credit Suisse and JP Morgan Chase in the Advanced Message Queueing Protocol (AMQP) working group. AMQP specifically is driven by the observation that "open standards efforts between companies to automate electronic transactions are often hindered by the need to incorporate proprietary solutions at the messaging layers of such protocol stacks."
It's also extremely high-performance, with native RDMA (Remote Direct Memory Access) InfiniBand support that allows end-to-end latencies in the microsecond range. This sort of extremely low latency within a data center was historically associated with smaller complexes of systems and simpler protocols. InfiniBand also continues to be widely used for certain types of high performance computing grids.
However, perhaps the most striking example of the importance of latency even across long distances comes by way of Hibernia Atlantic's announcement on September 30 that they are planning to build the lowest latency cable from New York to London to offer high frequency traders 60 millisecond latencies, which will be the fastest link across the Atlantic.
The first phase of Project Express will begin with a new cable from the county of Somerset in the U.K., to Halifax in Canada where it will then connect to Hibernia's current low latency cable from Halifax to New York. In addition, the new system will include branching units for future latency enhancements to the U.S. and Continental Europe. This work is projected to be completed by the summer of 2012.
Nate Anderson at Ars Technica notes that "operators can plan their geographic routes strategically to keep the total cable length a bit shorter than the competition. According to the consultants at Telegeography, breaking 60ms would make Project Express at least 5ms faster than its closest competitor."
The driving factor here is that, as described by Doug Cameron and Jacob Bunge in the Journal, there's "intense competition to harvest profits from often tiny movements in the price of securities and derivatives." This new transatlantic cable offers a window into how this sort of arbitrage is increasingly global, rather than regional, in scope and is limited only by technology and the laws of physics.
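How close is a 60 millisecond round trip to that physical limit? Here's a back-of-the-envelope check (the cable length and fiber refractive index below are my assumptions for illustration, not Hibernia's figures):

```python
# Light in optical fiber travels at roughly c divided by the fiber's
# refractive index. A transatlantic cable of ~6,000 km therefore has a hard
# physical floor on round-trip latency -- and 60 ms sits just above it.

C_KM_PER_MS = 299_792.458 / 1000   # speed of light in vacuum, km per ms
FIBER_INDEX = 1.47                 # typical refractive index of silica fiber
CABLE_KM = 6_000                   # assumed New York-to-London cable length

one_way_ms = CABLE_KM / (C_KM_PER_MS / FIBER_INDEX)
round_trip_ms = 2 * one_way_ms

print(f"one-way: {one_way_ms:.1f} ms, round trip: {round_trip_ms:.1f} ms")
```

Under these assumptions the floor works out to roughly 59 ms round trip, which is why shaving off even 5 ms requires physically shortening the route rather than improving the electronics.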
The first transatlantic fiber optic cable in a decade for the purpose of shaving 5 milliseconds off transaction times. Latency very much still matters. Perhaps more than ever.
Asking why cloud computing is happening today is something of a tautology. That's because an inclusive definition of cloud computing essentially equates it with a broad swath of the major advances happening in how IT is operated and delivered today.
Pervasive virtualization, fast application and service provisioning, elastic response to load changes, low-touch management, network-centric access, and the ability to move workloads from one location to another are all hallmarks of cloud computing. In other words, cloud computing is more of a shorthand for the "interesting stuff going on in IT" than it is a specific technology or approach.
But that doesn't make the question meaningless. It would be hard to argue that there isn't a huge amount of excitement (and, yes, hype) around changing the way that we operate data centers, access applications, and deploy new services. So forget the cloud computing moniker if you will. Why is this broad-based rush to do things differently happening right now?
The answer lies in how largely evolutionary trends can, given the right circumstances, come together in a way that results in something that's quite revolutionary.
Take the Internet. The first ARPANET link--the Internet's predecessor--dates to 1969. Something akin to hypertext was first described by Vannevar Bush in a 1945 article and Apple shipped Hypercard in 1984. But it took the convergence of things like inexpensive personal computers with graphical user interfaces, faster and more standardized networking, the rise of scale-out servers, the World Wide Web, the Mosaic browser, open source software like Linux and Apache, and the start-up culture of Silicon Valley to usher in the Internet as we know it today. And that convergence, once it began, happened quite quickly and dramatically.
The same could be said of cloud computing. The following interrelated trends are among those converging to make cloud computing possible.
Comfort level with and maturation of mainstream server virtualization. Virtualization serves as the foundation for several types of cloud computing including public Infrastructure-as-a-Service clouds like Amazon's and most private cloud implementations. So, in this respect, mature server virtualization software is a prerequisite for cloud computing. But the connection goes beyond technology. Increasingly ubiquitous virtualization has required that users get comfortable with the idea that they don't know exactly where their applications are physically running. Cloud computing is even more dependent on accepting a layer of abstraction between software and its hardware infrastructure.
The build-out of a vendor and software ecosystem alongside and on top of virtualization. From a technology perspective, cloud computing is about the layering of automation tools, including, over time, those for policy-based administration and self-service management. From this perspective, cloud computing is the logical outgrowth of virtualization-based services or--put another way--the layering of resource abstraction on top of the hardware abstraction that virtualization provides. Cloud computing can also involve concepts like pay-per-use pricing, but these too have existed in various forms in earlier generations of computing.
Browser-based application access. The flip side of mobile workloads is mobility of access devices. Many enterprise applications historically depended on the use of specific client software. (In this respect, client-server and then PCs represented something of a step back relative to applications accessed with just a green-screen terminal.) The trend towards being able to access applications from any browser is essentially a prerequisite for the public cloud model and helps make internal IT more flexible as well. I'd argue that ubiquitous browser-based application access is one of the big differences between today's hosted software and Application Service Providers circa 2000.
Mobility and the consumerization of IT are also driving the move to applications that aren't dependent on a specific client configuration or location. For more than a decade, we've seen an inexorable shift from PCs connected to a local area network to laptops running on Wi-Fi to an increasing diversity of devices hooked to all manner of networks. Fewer and fewer of these devices are even supplied by the company and many are used for both personal and business purposes. All this further reinforces the shift away from dedicated, hard-wired corporate computing assets.
The expectations created by consumer-oriented Web services. The likes of Facebook, Flickr, 37signals, Google, and Amazon (from both Amazon Web Services and e-commerce services perspectives) have raised the bar enormously when it comes to user expectations around ease of use, speed of improvement, and richness of interface. Enterprise IT departments rightly retort that they operate under a lot of constraints--whether data security, line-of-business requirements, or uptime--that a free social-media site does not. Nonetheless, the consumer Web sets the standard and IT departments increasingly find users taking their IT into their own hands when the official solution isn't good enough. This forces IT to be faster and more flexible about deploying new services.
And none of these trends really had a single pivotal moment. Arguably, virtualization came closest with the advent of native hypervisors for x86 servers. But, even there, the foundational pieces dated to IBM mainframes in the 1960s and it took a good decade even after x86 virtualization arrived on the scene to move beyond consolidation and lightweight applications and start becoming widespread even for heavyweight business production.
The richness of Web applications and the way they're accessed are even more clearly evolutionary trends which, even now, are still very much morphing down a variety of paths, some of which will end up being more viable than others. Smartphones, tablets, and other new client devices are just a few of the developments affecting how we access applications and what those applications look like.
Collectively, there's a big change afoot and cloud computing is as good a term for it as any. But we got here through largely evolutionary change that has come together into something more.
And that's a good thing. New computing ideas that require lots of ripping and replacing have a generally poor track record. So the fact that cloud computing is in many ways the result of evolution makes it more interesting, not less.
Computers that reliably understand human communications have been a staple of fiction going back decades or more. The Enterprise's computer in the 1960s vintage "Star Trek" series is as good an example as any. And truth is, that particular science-fictional ability probably would not have seemed all that remarkable to the typical person of the time.
Access billions of pages of text, pictures, and video from a gadget I can fit in my pocket? Play a game with immersive graphics on a huge, high-resolution screen that hangs on the wall? For a computer engineer, the fact that those inexpensive consumer devices have more computing power than all the then-computers in the world would impress as well. But understanding speech? That's something a toddler can do.
But understanding speech has turned out to be really difficult. In fact, just converting speech to text has been a huge challenge. Indeed, when IBM's Watson computer competes in a televised "Jeopardy" contest beginning tonight, the questions will be fed to it as text, rather than speech. But answering the often convoluted questions used on "Jeopardy" is hard enough even without processing the spoken word.
Although this contest takes place in the artificial setting of a game show, it does give us a glimpse into what is possible and what is not with artificial intelligence, that is AI, today. And perhaps where AI is going.
AI research is generally considered to have launched in 1956 at the Dartmouth Summer Research Conference on Artificial Intelligence. The hope of many researchers at that time was that they would be able to create a so-called "strong AI" over the next few decades--which is to say an AI that could reason, learn, plan, and communicate. Research in this vein has produced very limited results. One of the big problems has been the almost equal lack of progress in understanding how humans think; the failure of strong AI may well be tied to the limited headway in significant areas of cognitive psychology.
Some of the AI pioneers still have a more optimistic view. MIT's Marvin Minsky places the blame more on a shift away from fundamental research. As he puts it, "The great laboratories somehow disappeared, economies became tighter, and companies had to make a profit--they couldn't start projects that would take 10 years to pay off."
So Watson is in no real sense thinking and the use of the term "understanding" in the context of Watson should be taken as anthropomorphism rather than a literal description.
Is Watson just about brute force then? One might think so. Its hardware specs are impressive:
IBM Watson comprises ninety IBM POWER 750 servers, 16 terabytes of memory, and 4 terabytes of clustered storage. This is enclosed in ten racks including the servers, networking, shared disk system, and cluster controllers. Each of these ninety POWER 750 servers has four POWER7 processors, each with eight cores, giving IBM Watson a total of 2880 POWER7 cores.
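The core count follows directly from the configuration described above; a quick sanity check of the arithmetic, using the figures from IBM's description:

```python
# Sanity-check Watson's total core count from the published configuration.
servers = 90            # IBM POWER 750 servers
procs_per_server = 4    # POWER7 processors per server
cores_per_proc = 8      # cores per POWER7 processor

total_cores = servers * procs_per_server * cores_per_proc
print(total_cores)  # 2880, matching IBM's stated total
```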
To put this in perspective, by my estimate, Watson would have been the fastest supercomputer in the world on the TOP500 list just five years ago. And, although the disk and memory specs aren't nearly so impressive, remember that we're just talking about text-based data here. In fact, it's loaded with millions of documents--making the fact that it, like the human contestants, isn't hooked up to the Internet something of a red herring.
Chris Anderson, the editor in chief of Wired, argues that data often replaces underlying theory. He goes on to quote Peter Norvig, Google's research director: "All models are wrong, and increasingly you can succeed without them."
But thinking of Watson as just a big, fast computer that points to Wikipedia or the Oxford English Dictionary and pops out the right answer understates the complexity of the natural language processing that has to go on. If Jeopardy consisted solely of grade-school-type questions--excuse me, answers--like "the 42nd president of the United States," this would in fact be a relatively simple exercise. But many Jeopardy questions consist of wordplay, riddles, and other barriers to literal lookup of answers.
Watson is part of IBM's DeepQA project. The QA stands for question answering. As IBM researchers put it:
the open-domain QA problem is attractive as it is one of the most challenging in the realm of computer science and artificial intelligence, requiring a synthesis of information retrieval, natural language processing, knowledge representation and reasoning, machine learning, and computer-human interfaces.
In association with Carnegie Mellon University, IBM created the Open Advancement of Question Answering (OAQA) initiative "to provide a foundation for effective collaboration among researchers to accelerate the science of automatic question answering." Among other things, this initiative is intended to enable adapting Watson's software to new data domains and problem types.
Although Watson is certainly a powerful computer loaded with lots of data, as described in this PBS video, the software is very much a key ingredient here; many new algorithms and approaches were needed to make Watson competitive with strong human players--for example, approaches to learn from examples, to understand how context affects the significance of names and places, and to correlate multiple facts in arriving at a particular answer.
Strong AI proponents may well view something like Watson as something of a parlor trick in that it doesn't really try to reason as a human does. But, that said, there's much more--dare we say intelligence--involved here than there is in playing chess, a well-bounded and formalized problem. And given the longtime difficulty of understanding real intelligence, this is the AI path that seems to hold the most promise for now.
The idea of convergence, of one device replacing several, has long been a popular theme in forecasting high-tech gadgetry. It's also something that doesn't happen as often as predicted.
Some of the reasons relate to design and technology. It's hard to make a multitool as elegant for each individual function as specialist devices are. A form factor that's optimized around, say, being a phone demands serious technical compromises when it comes to a totally different function, such as taking a picture. And rapidly evolving technology means some functions in a device are inevitably behind the technology curve.
Increases in computing power and storage density alleviate some problems over time. And buying, feeding, and caring for fewer devices is usually preferable to dealing with more--to the point that compromises are often acceptable, especially for occasional or casual use.
But device categories still seem to collapse together less often or more slowly than predicted.
Economics is one reason. The same forces leading to faster gadgets with more storage have also--in concert with streamlined supply chains and offshore manufacturing--made them ever cheaper. In short, consumer electronics are not especially expensive by any historical standard, so lopping 20 percent off the bill by awkwardly mixing multiple functions together just isn't that big a win.
It strikes me, though, that the bigger reason why certain classes of devices remain so distinct is that we tend to interact with them in fundamentally different ways. And that's all too often overlooked in an industry that frequently views things through the engineering lens of what's possible rather than the user experience lens of what's natural.
For example, I don't know how much money has been squandered pretending that a TV is a big computer monitor that sits in front of a sofa. But it must be billions. WebTV, Intel's Viiv brand, and--who knows?--Google TV are just a few of the bones littering this computer landscape. There are a lot of complexities here, not least of which is content licensing and protection. But perhaps the biggest issue is that we don't use TVs the same way that we use computers.
There's even industry lingo for the difference. TVs are a lean-back, or 10-foot, experience. Computers are a lean-in, or 3-foot, experience. One is largely passive; the other is intensely interactive. This is a difference that I doubt would be bridged by a better remote control. Yes, viewers do increasingly want to select their shows rather than just accept what's coming over a broadcast stream, but that's a different statement from saying they want to tweet and comment and otherwise be part of the content in real time.
Given that video content increasingly comes from the Web in some form, I do think it makes sense to find easier ways to "throw" a video from a laptop to the TV hanging on the wall. But that's a different model than interacting on the TV itself.
Nor is it a coincidence that tablets suddenly went mainstream right at the time they came to market with user interfaces designed specifically for phones/tablets and not PCs.
After all, tablets are not new. A former IT analyst colleague was toting his Fujitsu tablet around conferences at least five years ago. Certainly, the size, weight, and cost of various components had to reach a certain point for tablets to be broadly viable. And, at least arguably, it took a company like Apple to make a market for a new product category that pushed the envelope of what was possible.
But I'd argue that the bigger change was that the tablet broke from an interaction model that was rooted in a PC operating system--and therefore keyboard, mouse, and stylus centric--to one that's multitouch-centric. The tablet as it has evolved isn't a PC without a keyboard; it's something fundamentally different: better at some things and not as good at others.
The main thrust of early cloud computing discussions--even before that particular term became popular--was fundamentally about economies of scale. For example, in his book "The Big Switch," author Nick Carr writes that: "Once it becomes possible to provide the technology centrally, large-scale utility suppliers arise to displace the private providers."
This was an imagined future of computing that reprised a journey taken by power generation technology in which expensive and customized local water turbines or steam engines driving gears and belts largely gave way to motors connected to the electric grid. This early-on discussion wasn't so much about how computing took place--presumably it would be handled efficiently by the mega-service providers offering this new type of utility. Rather it was about the economic model: standardized pay-as-you-go services delivered at massive scale.
The evolution of the electric grid was presented as the clearest parallel to cloud computing. However, many types of industrial processes are more efficient at large scale; backyard steel production doesn't work out well.
But something funny happened on the way to the cloud. Many applications, especially those used by consumers and smaller businesses, did indeed start shifting to public cloud providers like Google. However, with some exceptions, the trend in large organizations is something quite different. The idea of there being a "Big Switch" in the sense of all computing shifting to a handful of mega-service providers is, at a minimum, overstated.
In part, this is because computing is a lot more complicated than electricity. The electrons powering a motor don't have privacy and security considerations. The electrons encoding a Social Security number in a data center do. Plenty of other technical and governance concerns also conspire to make computing less utility-like.
However, as CA's Andi Mann writes in a recent post on his personal blog, even the cost benefits of public clouds aren't necessarily a given:
Public cloud can be cheaper than on-premise IT or private cloud, especially for selected services and SMBs. However for large enterprises, while there are plenty of reasons to use public cloud, cost reduction is not always one of them.
Public cloud certainly has a low start-up cost, but also a long ongoing cost. For all practical purposes, the ongoing cost is never-ending too. As long as you need it, you keep paying as much as you did on day one, without adding an asset to your books or depreciating your facilities investments.
The only hard data I've seen is in a McKinsey report that Mann also references in his post. However, when I speak with CIOs at large enterprises, I don't think I've heard one argue that public cloud resources can universally reduce costs. And this isn't a matter of reflexive "server hugging." There is equal unanimity that using shared resources for certain workloads and use cases does save money and bring other benefits.
The key to economically running many or most IT services internally seems to be a level of scale at which, to use a term from retired IBMer Irving Wladawsky-Berger, data center operations can be "industrialized"--which is to say standardized, process-driven, and highly automated. In other words, a scale at which the operational processes associated with large public cloud providers can be implemented in a dedicated manner for a single organization.
From the perspective of a company that owns and operates its own facilities, this point is probably somewhere around one or two data centers (given the need for some spare capacity for redundancy), although co-location providers and other ways of obtaining dedicated capacity within a shared physical infrastructure may drive necessary scale points even lower. And these economic realities are reflected in the forecasts of IT analyst firms like Forrester and IDC, all of which see rapid growth in private clouds.
And it's this widespread interest in building private clouds that's been one of the big surprises of cloud computing's still early years. The cloud discussion began as a shift to a fundamentally different economic model under which even large organizations would rent computing rather than building and owning it. Some of that's happening, but it's turning out to be just part of the cloud computing storyline.
Indeed, for organizations that view IT as a strategic asset--and more and more do--cloud computing is often less about adopting public clouds for their low costs and more about adopting their processes and applying them to the private cloud. In this case, cloud computing is far more about helping the business increase revenues than cutting the total cost of IT.
Tracy Kidder's 1982 Pulitzer-Prize-winning work of book-length reportage, "The Soul of a New Machine," is perhaps the best narrative of a technology-development project ever written. It's up there with "The Mythical Man Month" and "Showstopper." And the hero of that book was Tom West. The pages open with Tom at the helm of a sailboat in a storm. "In the glow of the running lights, most of the crew looked like refugees, huddled, wearing blank faces. Among them, Tom West appeared as a thin figure under a watch cap, in nearly constant motion."
Tom West, 71, died Thursday at his home in Westport, Mass. His passing is yet another mark of the end of the minicomputer era, an era so important to the way computing has evolved--and, indeed, to how Massachusetts evolved as part of the computer industry.
I'm certainly not impartial here. My first computer industry job was with West's employer, Data General, as product manager of the MV/7800, a 32-bit minicomputer that trailed the "Eagle," the MV/8000 32-bit minicomputer that was the subject of "The Book" (as Kidder's work was eventually called around DG). The MV/8000 was essentially DG's answer to Digital Equipment Corp.'s VAX, which had leapfrogged some major DG advancements in 16-bit minis. I would product manage many DG systems over the years.
Tom would, much later, "borrow" me to be part of an effort to roll out modular Unix Non-Uniform-Memory-Access servers. The concept would ultimately become standard practice. But DG's ability to profit from it would be hamstrung by the lack of a standardized Unix operating system in the years before Linux matured. (SCO's abortive efforts around Datacenter-quality Unix would be part of the problem.) That said, NUMA servers would be, in many ways, the last hurrah of DG's AViiON server division.
West did more than build minicomputers and advance them to Unix servers though, however complicated and nonobvious a process it may have seemed at the time.
In the late 1990s, while he was also orchestrating the shift from minicomputers to Unix, Tom was setting up a group to investigate projects that could take advantage of the mainstream Internet. The idea that a group of people within a corporation could do their own thing on their credit cards was a novel concept at the time. ThinLiine, which encompassed home-networking territory eventually tackled by the likes of Linksys, turned out to be ahead of its time.
In terms of charting DG's ultimate path, it was CLARiiON that made the most difference. West pushed the new-fangled idea of RAID--Redundant Array of Inexpensive Disks. The idea was that you could design things so that an individual disk could fail but it wouldn't much matter. West's CLARiiON concept commercialized this idea, brought it to market in a way that wasn't tied to DG servers, and ultimately made a failing server company attractive as an acquisition target to EMC. Arguably, the more midrange approach espoused by CLARiiON saved EMC during the Internet meltdown relative to the uber-high-end approach taken by traditional EMC Symmetrix.
It's easy to be dismissive of the whole Route 128 era of computer tech. Steven Levy's "Hackers" largely is. But I'll argue--and not just because I've worked for, or known, many of them--that the likes of Tom West, Ken Olsen, Ed DeCastro, and many others have ultimately shaped much of where we are today.
In early June, market researcher IDC cut its forecast growth for PCs in 2011 from 7.1 percent to 4.2 percent. Gartner Group, another IT analyst firm, had earlier trimmed its numbers. Meanwhile, Apple iPad sales continue to skyrocket, with 183 percent growth compared to the year-ago quarter. Android tablets have been slower to gain buyers, but they'll gain traction over time as well.
Those numbers would seem to lay out the case for a post-PC world rather starkly. Especially when you consider that they don't even consider phones which, in many emerging markets, are the "computer" of choice. Indeed, IDC explicitly attributes some of the soft demand to new types of devices. "Consumers are recognizing the value of owning and using multiple intelligent devices and because they already own PCs, they're now adding smart phones, media tablets, and eReaders to their device collections," said Bob O'Donnell, an IDC vice president. "And this has shifted the technology share of wallet onto other connected devices."
However, if you're, say, the major music labels, that sort of growth would be considered amazingly good. Album sales continued their free fall in 2010, falling another 13 percent in what has become a rather predictable year-end accounting. Or you could be Eastman Kodak. A recent quarter saw its film business revenue declining by 14 percent.
Against that backdrop, it's hard to call a product class with single-digit positive growth (with higher growth forecast for the longer term) that's headed to about 360 million units sold in 2011 passé. Certainly compared to things that are well on their way to becoming niches even in the near term.
There's a reason for this. Start creating presentations, working with big spreadsheets, or otherwise engaging in many types of content creation, and a tablet quickly gets out of its comfort zone. These tasks aren't impossible for the most part, but they are usually a lot easier and more straightforward on a notebook.
In short, the PC is hardly dying even if its growth slows and it stops being the default choice of client device for as many different things.
That said, it's both fair and meaningful to use the "post PC" shorthand. By way of (doubtless imperfect) analogy, there was once an "Age of Rail" when that mode of transportation was very much at the center of the transportation and economic world. We still have railroads but it would be hard to critique someone who, sometime in the 1950s, opined that we were now in a "post rail" world.
For one thing, even beyond sales numbers, I see a huge amount of evidence that tablets are changing all manner of long-held ways of accessing and consuming information and media. I was speaking at the Campus Technology 2011 event in Boston yesterday and iPads were perhaps the biggest single topic of conversation. One anecdote that particularly struck me was the observation that, after getting accustomed to tablets for about a year, students want to move away from traditional textbooks en masse.
Furthermore, tablets aside, PC-centricity is very much a developed markets view. Move beyond the U.S., Western Europe, and Japan, and much of the rest of the world is centered on phones--both smart and otherwise.
The shift to hosted services from Facebook to Google to iCloud is a big part of the change. The PC as the home's digital hub never really happened in the purest sense envisioned by the likes of Intel's Viiv initiative. However, the PC was nonetheless mostly the place where you stored your music and downloaded your software. That hub is rapidly moving out into the network, and local devices can increasingly be "disposable."
Finally, these changes are noteworthy because they have major implications for the vendor landscape. The PC era evolved to something of a monoculture that was maintained in large part because the application ecosystem placed a heavy premium on having a universal (or at least near-universal) processor and operating system platform. That's no longer nearly so much the case in a post-PC world.
In fact, what's so notable about the computer programming language landscape over time isn't so much its diversity and adaptability, but rather its inertia. COBOL and Fortran, the longtime standards for business and scientific programming respectively, remain in use albeit less widely so than at one time. Object-oriented programming, which encapsulates data together with the associated functions that operate on that data for more structured and maintainable code, came into initial widespread use largely through extending an existing language, C. C itself, originally designed as a language for programming systems at a very low level, was put into use for all sorts of application programming tasks for which it was arguably not very well-suited.
Does this change with cloud computing or, to be more precise, with an increased emphasis on browser-centric application access, big and unstructured data processing, and the development of a huge mobile ecosystem?
Public platform-as-a-service clouds introduce new possibilities to broaden the Web programming landscape. However, to the degree that an application programming interface (API) is limited to a single provider, moving an application elsewhere will require at least some porting. As a result, while we do see some providers offering APIs that are specific to a hosted environment, there's a strong argument for the flexibility of application portability across on-premise and a variety of hosted clouds.
The overall picture I see is one of change, but change that is mostly evolutionary and that doesn't involve a lot of radical overnight change from existing models.
Al Gillen, program vice president for system software at market researcher IDC, commenting on a survey I helped put together for VMworld in San Francisco at the end of August, "thought it very revealing that 'yesterday's' frameworks were the target for 'tomorrow's' apps." He went on to write that "tools will evolve and utilize new programming frameworks, then use will evolve over time, not so revolutionary."
Probably no data-mining legend has been more pervasive than the "beer and diapers" story, which apparently dates back to an early 1990s project that data-warehousing pioneer Teradata (then part of NCR) conducted for the Osco Drug retail chain.
As the story goes, they discovered that beer and diapers frequently appeared together in a shopping basket on certain days; the presumed explanation was that fathers picking up diapers bought a six-pack when they were out anyway. This correlation was then used to optimize displays and pricing in the stores.
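The kind of co-occurrence behind the story is usually framed as an association rule, measured by "support" (how often both items appear together) and "confidence" (how often the second item appears given the first). A minimal sketch, using invented transactions purely for illustration:

```python
# Minimal association-rule check: does "diapers -> beer" hold in these baskets?
# The transactions below are invented purely for illustration.
baskets = [
    {"diapers", "beer", "milk"},
    {"diapers", "beer"},
    {"diapers", "bread"},
    {"beer", "chips"},
    {"milk", "bread"},
]

n = len(baskets)
with_diapers = [b for b in baskets if "diapers" in b]
with_both = [b for b in baskets if {"diapers", "beer"} <= b]

support = len(with_both) / n                      # P(diapers and beer)
confidence = len(with_both) / len(with_diapers)   # P(beer | diapers)

print(f"support={support:.2f} confidence={confidence:.2f}")
# support=0.40 confidence=0.67
```

A real retail analysis would run this over millions of baskets and thousands of item pairs, but the measures themselves are this simple.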
That's the story anyway. The reality, as best anyone can determine, is more muddled. The evidence suggests that the project indeed existed. However, the beer-diapers correlation may or may not have been supported by the data. And, in any case, Osco seems not to have made any subsequent changes taking advantage of the purported relationship. That the story has lasted so long says more about the dearth of compelling success stories than anything else.
This isn't to suggest that data mining has never delivered any value. But I think it's fair to say that the gap between vendor marketing claims and gaining insights that were actually useful has been considerable. Data mining might tell Home Depot that it sells more snow shovels in the north than in the south and in winter than in summer--but the Home Depot store manager in Minneapolis doesn't need a sophisticated computer system to tell him that. (Though, as I'll get to, more has probably been going on behind-the-scenes than is generally known.)
But I'm starting to see evidence that this is changing. At least a bit. A lot of hard problems remain. This presentation by Paul Lamere and Oscar Celma (PDF) does a nice job of laying out the challenges with music recommendation, for example. But I'm also seeing enough "real world" data-mining anecdotes that it's hard not to take notice.
For example, Sasha Issenberg wrote in Slate earlier this month that "as part of a project code-named Narwhal, Obama's [re-election campaign] team is working to link once completely separate repositories of information so that every fact gathered about a voter is available to every arm of the campaign. Such information-sharing would allow the person who crafts a provocative e-mail about contraception to send it only to women with whom canvassers have personally discussed reproductive views or whom data-mining targeters have pinpointed as likely to be friendly to Obama's views on the issue." This contrasts with past practice whereby e-mails were more shotgun and stuck to relatively safe and unprovocative topics as a result.
In a recent New York Times article, Charles Duhigg wrote about how Target statistician Andrew Pole "was able to identify about 25 products that, when analyzed together, allowed him to assign each shopper a 'pregnancy prediction' score. More important, he could also estimate her due date to within a small window, so Target could send coupons timed to very specific stages of her pregnancy." Duhigg then goes on to tell a story about how, in one case, Target apparently knew about a high schooler's pregnancy before her father did.
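Pole's actual model is proprietary, but Duhigg's description suggests something like a weighted score over purchases of signal products. A purely hypothetical sketch of that idea--the product names and weights here are invented, not Target's:

```python
# Hypothetical "prediction score": sum the weights of signal products a
# shopper buys. Products and weights are invented; the real model is not public.
signal_weights = {
    "unscented lotion": 0.3,
    "large tote bag": 0.1,
    "zinc supplement": 0.25,
    "magnesium supplement": 0.25,
    "cotton balls": 0.1,
}

def prediction_score(purchases):
    """Score a shopper's purchases against the weighted signal products."""
    return sum(w for product, w in signal_weights.items() if product in purchases)

shopper = {"unscented lotion", "zinc supplement", "cotton balls"}
print(round(prediction_score(shopper), 2))  # 0.65
```

In practice such weights would come from fitting a statistical model (for instance, logistic regression) to shoppers with known outcomes, rather than being hand-assigned.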
As it turns out, the events recounted in Duhigg's story are not especially recent; Pole did his initial work in 2002. But it's not an area of its business Target wants to discuss. In part, this is doubtless because it views what it does with data mining as a trade secret. However, I'm sure it also stems from the reality that a lot of people find this sort of analysis at least a little bit "creepy" (to use the most common word being tossed around the Internet about this story).
More and more disparate data sets are available online and the tools to analyze them are getting both better and cheaper. Distributed server farms, public cloud-computing resources, open-source software including large-scale distributed file systems and Hadoop are just some of the tools that are starting to make this sort of analysis more mainstream (although many of the data sets are still proprietary and expensive).
But the challenges ahead won't just be technical. They'll be about what types of mining are considered right and proper and what aren't. As the Times noted in its article, "someone pointed out that some of those women might be a little upset if they received an advertisement making it obvious Target was studying their reproductive status."
SANTA CLARA, Calif.--That this week's O'Reilly Strata data conference was sold out says a lot about this corner of tech. It's hot. Like cloud computing, big data is all the rage, even if, like cloud computing, it's not so much a single thing but an intersection of technologies, market needs, and critical mass.
One of several themes that kept popping up this week was data vs. models.
In 2008, Wired's Chris Anderson wrote a provocative article titled "The End of Theory: The Data Deluge Makes the Scientific Method Obsolete." His thesis was that we have historically relied on models in large part because we had no other choice. However, "with enough data, the numbers speak for themselves." The counterargument is that useful insights don't just pop out of data. You have to ask the right questions.
The contrast between these two approaches came up in a lot of presentations. Overall, the speakers mostly sided with algorithms and models over just throwing more data into the mix. As Xavier Amatriain of Netflix put it, "data without a sound approach becomes noise." Yet Amatriain also gave insight into how finding the best results requires blending many different approaches, including adding additional types of data as appropriate.
The algorithms stemming from the much-ballyhooed Netflix Prize are actually a small piece of Netflix's overall movie recommendation process. There are a couple of reasons. The first is that the winning algorithms turned out to be very computationally intensive, in addition to being inflexible in other ways. The more important reason though is that predicting how customers would rate a movie, the objective of the Netflix Prize, was never the ultimate objective. That was to deliver better recommendations and, thereby, presumably increase the likelihood that they would remain Netflix subscribers. It turned out that marginally improving ratings prediction only went so far in improving recommendations overall.
Netflix therefore combines personalization, a wide range of algorithms, a huge amount of A/B testing (whereby different approaches are tried with different customer groups and the results evaluated), data from external sources, and even some randomness for serendipity. Data certainly plays a role, in fact a very central role, but it's far more complicated than feeding in the biggest possible datasets and letting the machine learning algorithms churn.
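A blend of the sort described can be sketched as a weighted combination of per-algorithm scores plus a small random term for serendipity. The algorithm names, weights, and scores below are all invented for illustration:

```python
import random

# Hypothetical recommender blend: combine scores from several algorithms,
# then add a touch of randomness for serendipity. All numbers are illustrative.
def blended_score(scores, weights, serendipity=0.05, rng=None):
    rng = rng or random.Random(0)  # fixed seed so the sketch is reproducible
    base = sum(weights[name] * s for name, s in scores.items())
    return base + rng.uniform(-serendipity, serendipity)

scores = {"matrix_factorization": 0.8, "popularity": 0.6, "similar_users": 0.7}
weights = {"matrix_factorization": 0.5, "popularity": 0.2, "similar_users": 0.3}

print(round(blended_score(scores, weights), 3))
```

The real system would also feed A/B test results back into the weights, which is where much of the engineering effort goes.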
(That said, Amatriain noted that certain types of problems, such as natural language recognition, use so-called "low bias models" that benefit from a lot of training data.)
Other examples come from the talk given by Hal Varian, Google's chief economist, who showed off Google Correlate. This tool lets you explore how search trends relate to data--such as time series economic data. This opens up possibilities such as finding leading indicators in search data for various types of economic activity.
Google Correlate obviously depends on access to Google's vast database of search terms. However, Varian's talk also touched on many of the complexities of interpreting correlations. For some purposes, it makes sense to seasonally adjust data, and for others it doesn't. You have to choose search patterns intelligently. You need to use appropriate statistical techniques to interpret the results.
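The underlying operation--relating a search-volume series to an economic series--is correlation over aligned time series. A minimal Pearson correlation sketch, with invented weekly data (the series and their relationship are assumptions for illustration):

```python
# Pearson correlation between two aligned time series (data invented).
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# e.g. weekly searches for a benefits-related term vs. new jobless claims
searches = [120, 135, 150, 160, 175, 190]
claims = [300, 310, 330, 345, 360, 380]

print(round(pearson(searches, claims), 3))  # 0.996
```

As Varian's caveats suggest, a high correlation like this one is only a starting point: seasonal adjustment, lag selection, and significance testing all come before treating search data as a leading indicator.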
These two examples, as well as others, nicely sum up the data vs. models question. There's a wealth of data both within and outside of organizations that has the potential to improve business results. But most insights won't come simply. They'll come through intelligent questions, intelligent algorithms, and intelligent selection of data sets. And, ultimately, the insights will improve the business only if they're then put into action.
It's dangerous for those of us in the tech industry to naively take what we see playing out in our workplaces every day as a mirror of the wider world. High-tech workers are often more technically savvy and likely to be early adopters. High-tech employers are likewise more inclined to let employees use the tools of their choice. And high-tech companies as a group are, almost by definition, far closer to technology adoption's leading edge.
Which raises the question of whether all the personal gadgets from smartphones to tablets to laptops that appear to be an increasingly integral part of most high-tech workplaces represent a broader norm or just a tech industry anomaly. Forrester Research's Frank Gillett recently published a report that takes a look at this question and highlights some of the findings in a blog post.
Forrester's "latest Forrsights workforce employee survey asked more than 9,900 information workers in 17 countries about all of the devices they use for work, including personal devices they use for work purposes." I found the results a bit eye-opening. I wasn't especially surprised to see that "IT consumerization," as the trend of bringing personal technology into the workplace is often called, does indeed appear to be a broad phenomenon. But I still sat up a bit because of just how big and how rapid the change has been. A couple of examples.
About 74 percent of the information workers in the survey used two or more devices for work -- and 52 percent used three or more.
When you dig into the data, the mix of devices used for work was different than what IT provides. About 25 percent were mobile devices, not PCs, and 33 percent used operating systems from someone other than Microsoft.
The vast majority of these gadgets aren't provided by IT. Gillett notes: "If you only ask the IT staff, the answer will be that most use just a PC, some use a smartphone, and a few use a tablet." It's something of an irony that "bring your own device" (BYOD), in its original top-down sense of a formal, vendor- and IT-driven program that provides a stipend for employees to purchase specific types of devices, has largely fallen flat even while grassroots BYOD is going gangbusters.
That said, a new report by market researcher IDC that looked at BYOD trends in Australia and New Zealand suggests that more formal BYOD programs may become more common. "Widely publicized and high-profile BYOD case studies are further adding to the peer pressure. One in every two organizations are intending to deploy official BYOD policies, be it pilots, or partial- to organizational-wide rollouts, in the next 18 months," said Amy Cheah, market analyst for Infrastructure.
What's perhaps the more interesting tidbit in this report though is when it offers something of a counterpoint to the assumption that BYOD is something that everyone outside of IT strongly wants and prefers. Cheah writes that "IDC's Next Generation Workspace Ecosystem research has found that only two out of ten employees want to use their own device for work and for personal use, which means corporate devices are still desired by the majority."
Why the disconnect between the apparent pervasiveness of employee-purchased devices in the workplace and the continued desire for IT-supplied hardware? I think Vittorio Viarengo was onto something when he wrote to me: "It is not about BYOD. It is about SYOM (Spend Your Own Money). That's why people like corporate devices."
It's increasingly common practice for people to use their personal smartphones for both business and pleasure, whether their cell phone bills are subsidized or not. And there doesn't seem to be a widespread expectation that employers will start buying tablets for their employees.
However, most companies still buy and support business PCs. I suspect we're seeing a certain lack of enthusiasm on the part of many employees for that part of the status quo to radically change, especially if it means turning a company expense into a personal one.
Cloud computing seems to often get used as a catch-all term for the big trends happening in IT.
This has the unfortunate effect of adding additional ambiguities to a topic that's already laden with definitional overload. (For example, on a topic like security or compliance, it makes a lot of difference whether you're talking about public clouds like Amazon's, a private cloud within an enterprise, a social network, or some mashup of two or more of the above.)
However, I'm starting to see a certain consensus emerge about how best to think about the broad sense of cloud, which is to say IT's overall trajectory. It doesn't have a catchy name; when it's labeled at all, it's usually "Next Generation IT" or something equally innocuous. It views IT's future as being shaped by three primary forces. While there are plenty of other trends and technology threads in flight, most of them fit pretty comfortably within this framework.
The three big trends? Cloud computing, mobility, and "big data."
Through the lens of next-generation IT, think of cloud computing as being about trends in computer architectures, how applications are loaded onto those systems and made to do useful work, how servers communicate with each other and with the outside world, and how administrators manage and provide access. This trend also encompasses all the infrastructure and "plumbing" that makes it possible to effectively coordinate data centers full of systems increasingly working as a unified compute resource as opposed to islands of specialized capacity.
Cloud computing in this sense embodies all the big changes in back-end computation. Many of these relate to Moore's Law, Intel co-founder Gordon Moore's 1965 observation that the number of transistors it's economically possible to build into an integrated circuit doubles approximately every two years. This exponential increase in the density of the switches at the heart of all computer logic has led to corresponding increases in computational power. (Although the specific ways that transistors get turned into performance have shifted over time.)
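It's worth pausing on just how quickly that doubling compounds. A minimal back-of-the-envelope sketch (the two-year doubling period is the commonly cited figure, not an exact physical constant):

```python
def density_multiplier(years, doubling_period=2.0):
    """Relative transistor density after `years`, assuming a fixed doubling period."""
    return 2 ** (years / doubling_period)

# Two years doubles density; a decade multiplies it 32x; two decades, over 1,000x.
for years in (2, 10, 20):
    print(f"{years:>2} years -> {density_multiplier(years):,.0f}x")
```

The point of the arithmetic: a trend that sounds modest year-over-year turns into three orders of magnitude within a career, which is why it reshapes the entire industry structure around it.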
Moore's Law has also had indirect consequences. Riding Moore's Law requires huge investments in both design and manufacturing. Intel's next-generation Fab 42 manufacturing facility in Arizona is expected to cost more than $5 billion to build and equip. Although not always directly related to Moore's Law, other areas of the computing "stack" -- especially in hardware such as disk drives -- require similarly outsized investments. The result has been an industry oriented around horizontal specialties such as chips, servers, disk drives, storage arrays, operating systems, and databases rather than, as was once the case, integrated systems designed and built by a single vendor.
This industry structure implies standardization with a relatively modest menu of mainstream choices within each level of the stack: x86 and perhaps ARM for server processors, Linux and Windows for operating systems, Ethernet and InfiniBand for networking, and so forth. This standardization, in concert with other technology trends such as virtualization, makes it possible to create large and highly automated pools of computing that can scale up and down with traffic, can be re-provisioned for new purposes rapidly, can route around failures of many types, and provide streamlined self-service access for users. Open source has been a further important catalyst. Without open source, it's difficult to imagine that infrastructures on the scale of those at Google and Amazon would be possible.
The flip side of cloud computing is mobility. If cloud computing is the evolved data center, mobility is the client. Perhaps the most obvious shift here is away from "fat client" PC dominance and towards simpler client devices like tablets and smartphones connecting through wireless networks using Web browsers and lightweight app store applications. This shift is increasingly changing how organizations think about providing their employees with computers, a shift that often goes by the "Bring Your Own Device" (BYOD) label.
However, there's much more to the broad mobility trend than just tablets and smartphones. The "Internet of Things," a term attributed to RFID pioneer Kevin Ashton, posits a world of ubiquitous sensors that can be used to make large systems, such as the electric grid or a city, "smarter." Which is to say, able to make adjustments for efficiency or other reasons in response to changes in the environment. While this concept has long had a certain just-over-the-horizon futurist aspect, more and more devices are getting plugged into the Internet, even if the changes are sufficiently gradual that the effects aren't immediately obvious.
Mobility is also behind many of the changes in how applications are being developed -- although, especially within enterprises, there's a huge inertia to both existing software and its associated development and maintenance processes. That said, the consumer Web has created pervasive new expectations for software ease-of-use and interactivity just as public cloud services such as Amazon Web Services have created expectations of how much computing should cost. The Consumerization of Everything means smaller and more modular applications that can be more quickly developed, greater reliance on standard hosted software, and a gradual shift towards languages and frameworks supporting this type of application use and development. It's also leading to greater integration between development and IT operations, a change embodied in the "DevOps" term.
The third trend is big data. It's intimately related to the other two. Endpoint devices like smartphones and sensors create massive amounts of data. Large compute farms bring the processing power needed to make that data useful.
Gaining practical insights from the Internet's data flood is still in its infancy. Although some analysis tools such as MapReduce are well-established, even access to extremely large data sets is no guarantee that the results of the analysis will actually be useful. Even when the objective can be precisely defined in advance, the best results often come from incrementally iterating and combining a variety of different approaches.
Big data is also leading to architectural changes in the way data is stored. NoSQL, a term which refers to a variety of caching and database technologies that complement (but don't typically replace) traditional relational database technologies, is a hot topic because it suggests approaches to dealing with very high data volumes. (Essentially, NoSQL technologies relax one or more constraints in exchange for greater throughput or other advantage. For example, when you read data, what you get back may not be the latest thing that was written.) NoSQL is interesting because so much of big data is about reading and approximations -- not absolute transactional integrity as with a stock purchase or sale transaction.
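The relaxed-consistency tradeoff described above can be illustrated with a toy model (the class and method names here are invented for illustration; real NoSQL systems implement replication far more elaborately): writes land on a primary immediately, while reads are served from a replica that only catches up when replication runs.

```python
class EventuallyConsistentStore:
    """Toy sketch of eventual consistency: the replica lags the primary
    until sync() runs, standing in for asynchronous replication."""

    def __init__(self):
        self.primary = {}
        self.replica = {}

    def write(self, key, value):
        # Writes are immediately durable on the primary only.
        self.primary[key] = value

    def read(self, key):
        # Reads come from the (possibly stale) replica -- higher throughput,
        # weaker guarantee: you may not see the latest write.
        return self.replica.get(key)

    def sync(self):
        # Replication eventually converges the replica with the primary.
        self.replica = dict(self.primary)

store = EventuallyConsistentStore()
store.write("price", 100)
print(store.read("price"))  # None: the write hasn't propagated yet
store.sync()
print(store.read("price"))  # 100: the replica has converged
```

For analytics over big data, a briefly stale read like this is usually harmless; for a stock purchase, it isn't, which is why relational transactional guarantees aren't going away.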
Early discussion of cloud computing focused on the public option. In fact, the economic concept of computing delivered as a sort of utility by mega service providers such as Amazon, Google, and Microsoft was at the core of the original cloud-computing concept.
As it turns out though, these public clouds are hardly the only form that cloud computing has taken. For this and other reasons, private and hybrid clouds -- which use computers and other IT resources controlled by a single organization -- have evolved to become an important part of the landscape.
However, to date, private and hybrid takes on cloud have mostly been confined to infrastructure as a service (IaaS). With IaaS, users make self-service requests for IT resources like compute, storage, and networking. These resources are typically virtualized rather than raw physical hardware, but they still largely mimic the physical server world. You, as a developer, must still start with a base operating system and install whatever tooling and middleware hasn't been preloaded into the standard service before you can begin developing applications. This isn't much different from the system administration duties required for a physical server.
Platform as a service (PaaS) takes things to a higher level of abstraction. With PaaS, developers are presented with an environment in which the underlying software stack required to support their code is "somebody else's problem." They write in a language like Java or C# or a dynamic scripting language like PHP, Python, or Perl and the underlying libraries, middleware, compilers, or other supporting infrastructure are just there. This implies a certain loss of control in fine-tuning that underlying infrastructure; you can't tweak settings in the operating system to make your code run faster. But, for many developers who want to focus at the application level, this is a more than acceptable tradeoff.
Different PaaS platforms provide different degrees of customization and portability. At one extreme, the PaaS is limited to a single public cloud platform. At the other are custom PaaS stacks deployable both on-premises and across a variety of public cloud environments.
This second approach seems to be picking up some steam. PaaS got started as largely a pure public cloud play -- think Microsoft Azure and Google App Engine. But we're starting to see a lot more discussion about transplanting the approach into the enterprise data center.
For example, in a recent blog post, Gartner Group research director Richard Watson asks "Why would we want a private PaaS?" Apparently his clients do. He writes that "Private PaaS is overcoming existential angst, to really keep me busy in terms of client inquiries." And, in answering his own (and his clients') question about the "why," Watson gives a good, succinct answer:
Private PaaS offers a welcome tool for enforcing platform standardization: delivering real developer agility by giving developers what they want in standard platforms. When IT infrastructure can provide a set of standard application platform templates in an automated, self-service way, they gain insight into how developers are using the approved platform set. If they make platform configuration easy and quick to use, the developers will not feel like they are being governed. Successful governance is about making the right thing also the easiest thing.
Watson makes a point that cuts to one of the core issues around cloud computing in an enterprise context. You need the self-service and fast access to resources, sure. Without that, you're really just talking about old, traditional processes that don't deliver on any cloud-computing promises -- whatever label is applied. But data privacy, security, and regulatory compliance aren't just old-school concerns. In fact, they often apply more than ever in an everything-connected-to-everything world where there are no neat inside-the-firewall and outside-the-firewall divisions.
The precise way these somewhat competing demands get balanced will depend on the individual situation. For some new-style applications, the answer is a "DevOps" approach that shifts many of the tasks that were historically in the domain of the operations staff onto the developers. In part, as noted by Watson's Gartner colleague Cameron Haight, this is handled by "automation of the configuration and release management processes."
However, in many enterprise IT uses today, more formalized and centralized processes are still needed even in a world of self-service and platform abstraction. There's still an overlap between development and operations. Applications need to be deployed in the broader context of their entire lifecycle and the constraints within which the enterprise must operate, such as security, compliance, or data privacy. Automation and self-service can certainly be deployed. It's just that IT operations may need to maintain a more specific role (and have more specific responsibilities) in ensuring the applications run in a way that doesn't violate any regulatory rules or other aspects of IT governance. Think of this operating model as ITOps PaaS.
CAMBRIDGE, Mass.--Most discussions about where computing is and where it's going end up talking about cloud computing and mobility. Yesterday's MIT Sloan CIO Symposium was no exception, whether those precise terms were used or not.
Perhaps the most striking example of just how rapidly mobile devices are forcing IT organizations to adapt came from Scott Griffith, the CEO of Zipcar, who said that "60 percent of interactions are now through an Android or an iPhone." He also noted that essentially BlackBerry's entire share had shifted to Android over a period of about two and a half years.
Griffith also emphasized the importance of data mining to his company. As with airlines, Zipcar is largely a fixed-cost business in the short term; it therefore makes sense to provide incentives for people to use vehicles at times of low demand. Data mining is the basis for a wide range of price promotions; it's also used for marketing programs such as referrals.
The symposium also provided examples of just how far "Bring Your Own Device" (BYOD) has progressed within some organizations. For example, Frank Modruson, CIO of Accenture, said that roughly 70 percent of the mobile devices at his company are employee purchased. They "put security down on the device" but provide no support. (They also offer the option of a standard IT-supported device for those who want one.) As a large and very distributed IT services organization, Accenture is typical of the sort of company I see moving to BYOD most aggressively.
Perhaps the biggest theme of the day, however, was delivering business innovation through information technology.
Part of this involves simply not doing certain things and moving them to a public cloud provider of some sort. Steven John, the CIO of Workday, said that IT organizations should concentrate on: "What are the things that others can't do? What are the things that only we can do? Focus on business technology knowledge." John went on to observe that, historically, customization has caused many IT failures.
It's not always possible to move to new IT structures overnight though. An academic panel moderated by Jason Pontin, the editor in chief of Technology Review, and consisting of Prof. Anant Agarwal, director of MIT CSAIL; Prof. Erik Brynjolfsson of MIT Sloan; and Joichi Ito, director of the MIT Media Lab, discussed ways to accelerate productivity and change. Advice included making a lot of small bets and shifting towards more data-driven and distributed decision making. The overall consensus was, in the words of Ito, "You don't need to spend money any longer. Let people try everything."
However, the CIO panel that followed brought a somewhat different perspective. For example, Thomas Sanzone, SVP of Booz Allen Hamilton, asked "Why are we embarrassed by legacy? What if we have a legacy of success?" He went on to note that the challenge of anything legacy is that the cost of maintenance is far smaller than the cost of development. I've heard similar sentiments from many CIOs. Legacy may be a dirty word but it often applies to systems that have been successfully running the business for many years. The challenge is often not so much how to get rid of these legacy systems but how to introduce innovation while dealing with the legacies in whatever way is appropriate.
Accenture's Modruson also observed that IT shouldn't be just about making small bets and failing fast. He said that "If you're a CIO, you want to try new things and not spend much money but if there are big returns to be had with big investments, we'll do those too." He went on to note that planning or lack of planning really depends on what you're doing. "If you're building a highway, it's kind of important to plan."
Former IBMer Irving Wladawsky-Berger suggested to me that some of this apparent disconnect between the academic view and the CIO view really comes down to platforms vs. applications. CIOs are responsible for creating a solid IT infrastructure on which the business can be run and on which innovation can happen. I think, at some level, the panelists who were more focused on the innovation side of the equation were taking the availability of robust infrastructure as something of a given (and, to be sure, perhaps largely ignoring the reality of legacy systems.)
This is the balancing act that we see time and time again with cloud computing. On one side of the scale is the speed, the agility, the low friction of "the cloud" -- whatever the precise form it takes. On the other side are the very real concerns of, not just IT organizations, but the business -- reducing risk, protecting data, and providing reliable service.
NEW YORK--Industry consortia are pervasive. But they often don't amount to much -- a spate of press releases, a series of progressively less energetic meetings making little progress, and the eventual fade to black. And even most successful consortia tend to be about vendors cooperating on specific standards and technologies. Important, but very limited in scope.
The Open Data Center Alliance (ODCA) has been an exception. It launched in October of 2010. Intel has been the organizing force and is the technical advisor to the organization, but the steering committee includes marquee end users such as BMW, Deutsche Bank, Disney, Marriott, JP Morgan Chase, National Australia Bank, and UBS. The focus of the organization is "to deliver a unified voice for emerging data center and cloud computing requirements" expressed primarily in the form of usage models. For example, a VM Interoperability usage model defines user requirements for virtual machine interoperability in a hybrid cloud environment.
This week, about 18 months after its founding, the ODCA held its first conference in New York, Forecast 2012. Run mostly in the form of "rapid fire panels," many of the topics came down to opportunity and risk -- and how to balance the two. The two panels on which I participated were typical: one was on software innovation, the other cloud regulation.
Richard Villars of market researcher IDC moderated the software innovation panel. As Eric Mantion wrote, "It should be of no surprise that the phrase 'Open Source' was mentioned several times. However, another fascinating observation was the impact that the 'Cloud' was having on how software is even distributed today."
The genesis of the latter point was an audience question about whether some of the large, traditional enterprise software vendors are holding back the adoption of cloud whether private, public, or otherwise. From my perspective, this question was backwards. Mantion notes that I raised a few eyebrows when I responded that "consumers are getting accustomed to the cloud, so if software vendors aren't embracing that, then their competition will and they will be left behind."
However, there was a perhaps surprising level of agreement on the panel given the differing roles and employers of the panelists, which included proprietary software vendors, open source software vendors, and public cloud providers (and various intersections thereof). For example, the notions that open source has been a fundamental enabler of public clouds and that other aspects of openness such as APIs can be critical for both innovation and user acceptance sparked little debate.
Innovation, flexibility, ease-of-use, agility, and speed are one face of cloud computing. In the form of the public cloud, they represent a new benchmark for enterprise IT. But enterprise IT (especially as embodied by the larger, more conservative end-user organizations who are representative of the ODCA membership) has concerns that are often in tension with fast and easy.
One such concern is security. Take just about any survey on inhibitors to cloud adoption and "security" is likely to lead the list. Christofer Hoff of Juniper Networks moderated a panel on the topic.
It's not that public clouds are inherently insecure compared to an in-house infrastructure. All the panelists agreed on this point. Dov Yoran, the CEO of ThreatGRID, bluntly stated: "For smaller companies, the cloud is more secure because they don't have the infrastructure in place. As a small company, it's pretty straightforward: you are going to get a better level of security [in public cloud] when you have a part-time security guy."
But issues remain, especially with respect to data location and other policies. Dell's Mark Wood echoed the general consensus when he said: "Cloud introduces a loss of control that we don't yet have good answers for. The really hard part as a cloud service provider is pulling out the bits but making sure you only get the bits which are important [in response to e-discovery]." Ian Lamont of BMW was even blunter: "E-discovery is nightmarish in the cloud."
Similar themes carried through to the regulation panel moderated by Deborah Salons. Especially troublesome to a number of the panelists was what one described as the "balkanization" of data regulations across different countries, most notably in Europe.
Yin and yang, speed and control, are at the heart of the future of enterprise computing. Historically IT was focused on control--which worked well enough when the job of IT was relatively well-defined and bounded--although not well enough to prevent successive waves of more distributed systems. But with IT increasingly a strategic weapon in more and more industries, simplistically locking everything down is no longer an option. Going forward, the focus needs to be on bringing together the best of both worlds: the agility of the cloud as demonstrated by leading public clouds and the control needed by enterprises to meet regulatory, security, audit, and data privacy requirements.
"Big Data" promises to turn terabytes, petabytes, and exabytes (with, presumably, zettabytes and yottabytes to come) of what's often ambient digital detritus into useful results. That promise often seems to come with an implicit assumption; with enough data and the tools to crunch it, useful insights will follow. Insights that can be used to make businesses more efficient, tailor everything from medicine to advertising for individuals, and employ instrumentation and automation on larger and more complex physical systems than ever before.
For example, we're in the early days of what sometimes goes by the name of the "Internet of Things," the idea that we'll have pervasive meshes of sensors recording everything and integrated together into feedback loops that optimize the system as a whole. IBM, with rather more marketing dollars than the academics who first coined the concept, talks about this idea under an expansive "Smarter Planet" vision.
Some of this smart-systems talk leads the reality by a (long) way, to be sure. But no one really disputes that instrumentation can be used to optimize behavior at the level of an overall system. It's pretty standard command-and-control system dynamics stuff that's done all the time. The only thing that's really new is the scale of the systems, the sensor net, and the feedback controls.
There are also examples of success, even if some are incremental and tactical. Even if the Netflix prize for improving movie recommendations didn't achieve any particular breakthrough, the workaday efforts of Netflix engineers have improved recommendations across a number of fronts. And those improvements are both based on data and tied into improving business outcomes -- in this case, retaining subscribers. Other examples, from Obama re-election campaign e-mail targeting to Target "pregnancy prediction" scores, suggest there's at least some value in using the results of data analysis to affect consumer behavior in a specific way.
A recent announcement along these lines is bigdata@CSAIL, which brings together the work of more than 25 MIT professors and researchers with the Intel Science and Technology Center for Big Data at CSAIL (Computer Science and Artificial Intelligence Laboratory); it will focus on areas such as finance, medicine, social media, and security.
It's hard to argue that larger volumes of data, increasingly available at nearly the instant it's generated, won't play a bigger and bigger part in any number of applications -- both for good and ill.
However, as Big Data hype accelerates, it's also useful to maintain an appropriate level of skepticism. While data can indeed lead to better results, this won't always be the case. The numbers don't always speak for themselves and sometimes the underlying science to apply data, however plentiful, in a useful way just doesn't exist.
For example, there's a widespread assumption that personalized advertising is more effective advertising. But a reader's comment on Michael Wolff's "The Facebook Fallacy" nicely summarizes why this might not be the case.
There is not now, nor is there anything on the horizon, that is a scalable, automated means of exploiting people-generated data to extract actionable marketing information and sales knowledge. A well-known dirty little secret in the advertising world is that, even after millennia of advertising efforts, not a single copywriter can tell you with any confidence beyond a coin flip whether any given advertisement is going to succeed. The entire "industry" is based on wild-assed guesses and the media equivalent of tossing noodles against the kitchen wall to see what might stick, if anything.
Peter Fader, co-director of the Wharton Customer Analytics Initiative at the University of Pennsylvania, talks of a "data fetish" that is leading to predictions of vast profits from mining data associated with online activity. However, he goes on to note that more data and data from mobile devices doesn't always lead to better results. One reason is that "there is very little real science in what we call 'data science,' and that's a big problem."
We'll only see more stories about great results being achieved by applying data to some problem in a novel way. Especially when there's solid underlying science, algorithms, and models limited only by the quality or quantity of the inputs, more and different types of data can indeed lead to impressive results and outcomes.
But this doesn't mean that bigger data will always hold the key. Sometimes data is just data -- noise, really. Not information. It doesn't matter how much you store or how hard you process it.
Much has been written about security and other headaches that employee-owned devices can cause for IT departments. Much of this hand-wringing is arguably overblown given all the products, technologies, and established best practices available to mitigate risk. Nonetheless, dealing with a wide variety of client hardware over which they have little control requires at least some level of planning and work for IT professionals.
The justification for this effort? Sometimes it's framed with productivity metrics. But, really, the usual justification is that it's happening with or without IT's acquiescence and participation. The storyline then continues on about how companies that don't get BYOD, social media, and other hot trends won't be able to hire anyone under a certain age.
But a few stories have popped up recently questioning whether the "bring your own device" movement is actually desirable. Not from the perspective of a reluctant IT department, but from the point of view of employees.
For example, over at Computerworld, Steven Vaughan-Nichols writes:
BYOD is a slippery slope. It started because we loved our tech toys and wanted to use them for work. That was great for executives who could afford to buy the latest and greatest iPad every time Apple released one. But when BYOD becomes a requirement, it's a pain for those in the upper salary brackets and a de facto cut in pay for those who don't make the big bucks.
Amy Cheah, market analyst for Infrastructure at IDC in Australia and New Zealand, told David Needle in March that "IDC's Next Generation Workspace Ecosystem research has found that only 2 out of 10 employees want to use their own device for work and for personal use, which means corporate devices are still desired by the majority."
How does one reconcile the enthusiasm for BYOD in some circles with the distaste in others?
First, it's a given that different people will have different preferences. Employees span a wide range of personal preferences, salary levels, job descriptions, and technical competence. That some prefer to just be given the tools they need to do their job and have them fixed or replaced if they stop working is hardly surprising. Company policies also differ. Some IT departments may indeed see BYOD as a means to cut out an existing cost, others as a way to give the employees who want it more flexibility.
However, I also suspect that the way we use the BYOD term today blurs an important distinction. Whatever the future may bring, in the here and now there are important differences between smartphones and tablets on the one hand and PCs on the other.
As far as smartphones are concerned, any debate over whether BYOD will or should happen is long past. People mostly buy their own phones and generally use the same one for both personal and company use. One need only look at the financial statements of BlackBerry-maker RIM to chart the decline of dedicated enterprise-optimized smartphones. The only real question is to what degree a company subsidizes monthly carrier charges.
Tablets shouldn't cause much debate either. In their current form, tablets are primarily an adjunct to a PC that can make reading, Web surfing, game playing, and other types of media consumption more natural and comfortable. Time will tell whether tablets and PCs reconverge in the coming years, but in their current form, tablets can't take the place of a PC for general business use. (Unless they're configured for some dedicated task.) Thus, though many employees do indeed want to connect their tablets to corporate e-mail and networks, they're doing so as additional devices--not substitutes for something currently supplied by an employer.
Smartphones and tablets also have in common that they can be thought of as cloud clients. They don't store much data. They synchronize to online backups (or a PC). They're pretty simple to use insofar as they mostly work or they don't work.
PCs are different.
They can store a lot of files and other data, which will be all mixed together unless special care is taken to isolate personal files from employer files. A variety of products that use virtual machines and other technologies can provide isolation within a single PC for different types of use. However, none of these products has gone mainstream and, for many users, such approaches seem too intrusive for a personal system. Thus, a PC used for work is arguably not truly personal any longer if a company has, for example, some legal reason to examine stored files.
With more and more applications sporting Web interfaces rather than requiring dedicated client software installed on individual PCs, it certainly becomes more practical for employees to use their own PCs for company work. And for some, that will be their preference, whether because they want a particular type of laptop or simply because what they do personally and what they do professionally are so mixed together anyway. Doing so requires following proper security practices and backup procedures, and being comfortable doing your own tech support. But it can be a reasonable trade-off, all the more so if the company is willing to provide some sort of stipend in lieu of supplying a PC.
However, I'm skeptical that it makes sense in most cases to have an all-encompassing BYOPC program. Many people still find PCs (and, yes, I include Macs) to be sometimes confounding and frustrating pieces of gear that develop subtle and hard-to-debug problems. The same people may have difficulty following IT security policies. Ultimately, there are still enough complexities with PCs that it's just not practical for IT to get completely away from supporting clients in most environments.
"There's that pesky speed of light." That cautionary remark was offered by Lee Ziliak of Verizon Data Services, speaking on a panel at the 451 Group's Hosting and Cloud Transformation Summit last week. The context was that hybrid cloud environments may logically appear as something homogeneous, but application architectures need to take the underlying physical reality into account.
Latency, the time it takes to move data from one location to another, often gets overlooked in performance discussions. There's long been a general bias toward emphasizing the amount of data rather than the time it takes to move even a small chunk. Historically, this was reflected in the prominence of bandwidth numbers -- essentially the size of data pipes, rather than their speed.
As I wrote back in 2002, system and networking specs rate computer performance according to bandwidth and clock speed, the IT equivalents of just measuring the width of a road and a vehicle engine's revolutions per minute. While they may be interesting, even important, data points, they're hardly the complete story. Latency is the time that elapses between a request for data and its delivery. It is the sum of the delays each component adds in processing a request. Since it applies to every byte or packet that travels through a system, latency is at least as important as bandwidth, a much-quoted spec whose importance is overrated. High bandwidth just means having a wide, smooth road instead of a bumpy country lane. Latency is the difference between driving it in an old pickup or a Formula One racer.
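The road analogy reduces to a toy formula: delivery time is a fixed latency term plus the payload size divided by bandwidth. The sketch below uses illustrative numbers (not measurements) to show how, for a small transfer, a low-latency link can beat a much fatter pipe:

```python
def transfer_time(payload_bytes, bandwidth_bps, latency_s):
    """Total delivery time: fixed latency plus time to push the bytes."""
    return latency_s + payload_bytes / bandwidth_bps

# A wide pipe with high latency vs. a narrower pipe with low latency,
# each delivering a single 1,500-byte packet. Figures are assumptions.
wide_slow = transfer_time(1_500, 10e9, 0.040)     # 10 Gbps link, 40 ms latency
narrow_fast = transfer_time(1_500, 1e9, 0.0005)   # 1 Gbps link, 0.5 ms latency

print(f"wide pipe, high latency:  {wide_slow * 1000:.4f} ms")
print(f"narrow pipe, low latency: {narrow_fast * 1000:.4f} ms")
```

For this packet, the 10 Gbps link takes roughly 80 times as long as the 1 Gbps one: the serialization time is negligible either way, so latency dominates completely.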
The genesis of that decade-old research note was rooted in the performance of "Big Iron" Unix servers and tightly coupled clusters of the same. At the time, large systems were increasingly designed by connecting (typically four-processor) building blocks into a larger symmetric multiprocessing system using some form of coherent memory interconnect. These modular architectures had a number of advantages, not least of which was that they made much more incremental upgrades possible. (In a more traditional system architecture, much of the interconnect hardware and other costly components had to be present even in entry-level systems.)
The downside of modularity is that, relative to monolithic designs, it tends to result in longer access times for memory that wasn't in the local building block. As a result, the performance of these Non-Uniform Memory Access (NUMA) systems depended a lot on keeping data close to the processor doing the computing. As NUMA principles crept into even mainstream processor designs -- even today's basic x86 two-processor motherboard is NUMA to some degree -- operating systems evolved to keep data affined with associated processes.
However, while software optimizations have certainly helped, the biggest reason that NUMA designs have been able to become so general purpose and widespread is that modern implementations aren't especially nonuniform. Early commercial NUMA servers running Unix from Data General and Sequent had local-to-remote memory access ratios of about 10:1. The difference in memory access times in modern servers -- even large ones -- is more like 2:1 or less.
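Those ratios translate directly into average access times. As a back-of-the-envelope sketch -- the 100 ns local access time and 80% locality figure are assumptions for illustration, not measured values -- here's why a 10:1 machine punished poor data placement so much harder than a 2:1 one:

```python
def effective_access_ns(local_ns, remote_ratio, local_fraction):
    """Average memory access time, weighting local vs. remote accesses."""
    remote_ns = local_ns * remote_ratio
    return local_fraction * local_ns + (1 - local_fraction) * remote_ns

# Same workload locality (80% of accesses local), very different outcomes.
early_numa = effective_access_ns(100, 10, 0.8)   # ~10:1 ratio, early NUMA
modern_numa = effective_access_ns(100, 2, 0.8)   # ~2:1 ratio, modern server

print(f"early NUMA:  {early_numa:.0f} ns average")
print(f"modern NUMA: {modern_numa:.0f} ns average")
```

With a 10:1 ratio, the 20% of accesses that go remote more than double the average access time; at 2:1, the same locality profile costs only a modest premium over all-local access.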
However, as we start talking about computing taking place over a wider network of connections, the ratio can be much higher. More than once over the past decade, I've gotten pitches for various forms of distributed symmetric multiprocessing systems that were intriguing -- but which rested on the assumption that long access times for data far away from where it was being processed could somehow be mitigated. The problem is that, for many types of computation, the need to synchronize results pulls performance toward the slowest access rather than the fastest. Just because we make it possible to treat a distributed set of computing resources as a single pool of shared memory doesn't mean that it will perform the way we expect when we load up an operating system and run a program.
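A tiny illustration of that synchronization effect, using made-up access times for a hypothetical five-node shared-memory pool:

```python
# When a computation step must synchronize (e.g., at a barrier), every
# participant waits for the slowest one -- a single remote straggler
# dominates the step time regardless of how fast the other accesses are.
access_times_us = [1, 1, 1, 1, 250]  # four local accesses, one far remote

average_us = sum(access_times_us) / len(access_times_us)
step_us = max(access_times_us)  # the barrier completes only when the slowest does

print(f"average access: {average_us:.1f} us")
print(f"synchronized step time: {step_us} us")
```

The average access time looks tolerable, but any step that has to wait for all participants runs at the speed of the one remote access.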
This lesson is highly relevant to cloud computing.
By design, a hybrid cloud can be used to abstract away details of underlying physical resources such as their location. Abstraction can be advantageous; we do it in IT all the time as a way to mask complexity. Indeed, in many respects, the history of computer technology is the history of adding abstractions. The difficulty with abstractions is that aspects of the complexity being hidden can be relevant to what's running on top -- such as where data is stored relative to where it's processed.
Two factors accentuate the potential problem.
The first is that a hybrid cloud can include both on-premise and public cloud resources. There's a huge difference between how much data can be transferred and how quickly it can be accessed over an internal data center network relative to the external public network. Orders of magnitude difference.
The second is that, with the growing interest in what's often called "Big Data," we're potentially talking about huge data volumes being used for analysis and simulation.
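Combining the two factors gives a rough sense of scale. The figures below are assumptions for illustration -- say, a 10 Gbps internal data center network versus a 100 Mbps effective path to a public cloud -- and ignore protocol overhead and contention:

```python
def hours_to_move(terabytes, bandwidth_gbps):
    """Idealized transfer time: payload bits over a sustained link rate."""
    bits = terabytes * 1e12 * 8
    return bits / (bandwidth_gbps * 1e9) / 3600

dataset_tb = 10  # a hypothetical analysis data set

lan_hours = hours_to_move(dataset_tb, 10.0)  # internal 10 Gbps network
wan_hours = hours_to_move(dataset_tb, 0.1)   # 100 Mbps public connection

print(f"internal network: {lan_hours:.1f} hours")
print(f"public network:   {wan_hours:.1f} hours")
```

A couple of hours inside the data center becomes more than a week over the public connection -- the same two-orders-of-magnitude gap as the link speeds themselves.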
All of this points to the need for policy mechanisms in hybrid clouds that control workload and data placement. Policy controls are needed for many reasons in a hybrid cloud. Data privacy and other regulations may limit where data can legally be stored. Storage in different locations will cost different amounts. Fundamentally, the ability of administrators to set policies is what makes it possible for organizations to build clouds out of heterogeneous resources while maintaining IT control.
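To make the idea of placement policy concrete, here is a deliberately simplified sketch. The rule names and record fields are invented for illustration; real cloud management tools express such policies in their own configuration languages:

```python
# Hypothetical placement-policy check for a hybrid cloud: regulated data
# must stay on-premise; otherwise, run the workload next to its data.
WORKLOADS = [
    {"name": "payroll-analytics", "data_location": "on-premise", "regulated": True},
    {"name": "web-log-crunch", "data_location": "public-cloud", "regulated": False},
]

def place(workload):
    """Return where a workload should run under the two rules above."""
    if workload["regulated"]:
        return "on-premise"          # compliance trumps everything else
    return workload["data_location"]  # otherwise, follow the data

for w in WORKLOADS:
    print(f'{w["name"]} -> {place(w)}')
```

Even this toy version captures the point: policy evaluation, not manual judgment, is what lets an administrator keep control as workloads move across heterogeneous resources.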
How applications and their data need to relate to each other will depend on many details. How much data is there? Can the data be preprocessed in some way? Is the data being changed or mostly just read? As a general principle, though, computation is best kept physically near the data it operates on. In other words, if the data being analyzed is gathered on-premise, that's probably where the processing should be done as well.
If this seems obvious, perhaps it should be. But it's easy to fall into the trap of thinking that, if differences can be abstracted away, those differences no longer matter. Latencies can be one of those differences -- whether in computer system design or in a hybrid cloud.