I Got a Summer Job at the Beach

I have landed a pretty good summer job down here, working at a little amusement park near Whitecap Beach in Corpus Christi. It is not a bad deal, especially since I am going to be sharing a place on the beach with five other guys. I had been looking at some fairly nice Corpus Christi apartments, but it became increasingly clear that renting one was just not practical, at least not a one-bedroom place of my own. That would have run at least eight hundred and fifty dollars a month.

How to Update Your Graphics Drivers

There is no question that, in this day and age, computers have become a necessity in our society. From simple doodling and homework to complex business strategies and massive storage of private and government data, computers have become an indispensable partner in modern life. However, the use of computers is not limited to “serious” business. For many people, it’s all about fun and games.

Every computer owner has, at one time or another, played a game or two on their machine. Whether it’s single-player games like Plants vs. Zombies™ and Angry Birds™ or more competitive multiplayer titles such as Defense of the Ancients (DotA)™ and Counterstrike™, every computer user inevitably becomes a ‘gamer’ in their own geeky way. Unfortunately, leisure activities such as these take their toll on your computer – specifically on your graphics hardware and its driver.

In technical terms, a graphics driver is the software containing the instructions that drive the computer’s graphics device, usually through the display screen. The video card itself typically connects through an interface such as PCI Express, and the driver determines the maximum resolution and the number of colors your display can show. Normally, this driver is already installed when you purchase a computer. Like other Windows drivers, a graphics driver benefits from being updated rather than left as it is for long periods of time.

For those who use their computers primarily for gaming, the decision of whether to update the graphics driver usually depends on the game’s developer. To get the most out of the hardware, most video card manufacturers collaborate with game developers to release driver updates that improve your display’s performance. You can update your graphics driver by following the steps below:

Create a system restore point. Don’t do anything until you have a fallback in place, just in case the update does not yield the results you are hoping for. A system restore point allows your operating system to roll back to its previous state – the one it was in before you installed or updated anything.

You can do this by right-clicking “My Computer” and selecting “Properties”, which opens the system information window. Click “System Protection”, then use the “Create” button on that tab to make a restore point.
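If you prefer to script this step, the restore point can also be created from the command line. Here is a minimal sketch in Python, assuming Windows 7 or later and an elevated (Administrator) session; it simply shells out to the built-in Checkpoint-Computer PowerShell cmdlet:

    import subprocess

    def create_restore_point(description="Before graphics driver update"):
        # Checkpoint-Computer is a built-in PowerShell cmdlet; it needs an
        # elevated (Administrator) session to succeed.
        subprocess.run(
            ["powershell", "-Command",
             f'Checkpoint-Computer -Description "{description}" '
             '-RestorePointType "MODIFY_SETTINGS"'],
            check=True,  # raise an error if the restore point was not created
        )

    if __name__ == "__main__":
        create_restore_point()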

Find out what type of video card you are using and get its manufacturer’s name. You can do this by right-clicking your Desktop and selecting “Properties”, which opens the Display Properties window. Go to the “Settings” tab, where you can find the chipset type and the video card’s manufacturer on the Display line.
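The same information is also available without clicking through dialogs. Here is a small sketch, assuming a Windows machine with the standard wmic tool on the PATH; it queries the video controller’s name and current driver version through WMI:

    import subprocess

    # Ask WMI for the video card's name and the driver version it is using.
    output = subprocess.check_output(
        ["wmic", "path", "win32_VideoController",
         "get", "Name,DriverVersion", "/format:list"],
        text=True,
    )
    print(output.strip())
    # Typical output (values depend on your hardware):
    #   DriverVersion=...
    #   Name=NVIDIA GeForce ...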

Visit the manufacturer’s website to get the latest drivers. Most video card manufacturers have a download site from which you can download and install updated drivers. Just follow their instructions and you’re all set. Of course, it is not unheard of for the drivers offered by the manufacturer not to support your operating system. If that is the case, visit your computer maker’s website instead – that is, if you are using a Dell™ system, download the latest drivers from Dell’s website. The same goes for other computer brands.

After updating and restarting your computer, the display may revert to its default low resolution. Fret not: just adjust your screen settings in the Display Properties window.

Even if you do not use your computer heavily for gaming, you can still benefit from updating your graphics driver, since an update not only improves performance but also fixes bugs and glitches that were present in your system.

So what are you waiting for? Update your graphics driver and let the games begin!

Getting Acquainted with Your Computer: What are Windows Drivers?

In this day and age, anyone who has never even seen or touched a computer might as well be living under the proverbial rock. Even those in the legal and medical fields have embraced modern technology. As technology progresses, so does our society’s dependency on computers. From small businesses to large industries, computers and computer-like technologies have inevitably become the heart and soul of operations.

A computer is, technically, an electronic device that accepts, stores, and manipulates digitized information according to a program – a sequence of instructions – at very high speed in order to produce results that are useful to its user. Since computers play such an important part in our lives, it is only practical that we familiarize ourselves with some of their most basic components.

Computers are composed of software and hardware. Software is the common term for the computer programs that direct the operation of a computer. Hardware, on the other hand, is the generic term for all of the computer’s physical parts, such as the motherboard, storage controllers, mass storage drives (hard disk, CD-ROM, etc.), and peripheral devices such as printers, scanners, and webcams. Software and hardware are connected by drivers.

A Windows driver acts as a conduit and controller between a particular piece of software and a specific piece of hardware. It serves as a translator that lets devices communicate with each other through specialized commands unique to that driver, and it is often packaged as a dynamic link library (DLL) file. It is not unusual to buy a computer with all the pertinent drivers already installed. However, a new device usually requires either a new driver or an upgraded version of an existing one to convert the generic input/output instructions of that device into a “language” the operating system can understand. Simply put, without Windows drivers your hardware will not function properly, or will stop working altogether.

As mentioned earlier, most Windows drivers are already present in your operating system when you purchase a computer. However, some drivers must be downloaded and installed, either from a disc or from the manufacturer’s website. Furthermore, drivers can contain errors and glitches that are only corrected by updating them, for example through Windows Update. Most manufacturers update their hardware drivers more than once a year, so it pays to know which of your drivers need updating. An update gives the driver new features made available by its manufacturer and, at the same time, fixes recurring hardware problems.
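A quick way to see which drivers are installed, and how old they are, is the built-in driverquery command. The sketch below assumes a standard Windows installation and the column names produced by driverquery’s default CSV output:

    import csv
    import io
    import subprocess

    # driverquery ships with Windows; /FO CSV makes the output parseable.
    raw = subprocess.check_output(["driverquery", "/FO", "CSV"], text=True)
    for row in csv.DictReader(io.StringIO(raw)):
        # Default columns include Module Name, Display Name, Driver Type,
        # and Link Date (a rough indicator of the driver's age).
        print(row["Module Name"], "-", row["Link Date"])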

Many computer problems are the result of a faulty driver. More often than not, computer error codes are the product of (1) a driver that was not installed properly, (2) the wrong driver being installed, or (3) a driver that is out of date. It is also important to take your operating system’s compatibility with a particular Windows driver into consideration. And remember: always read the instructions provided by the driver’s manufacturer. Don’t let the fine print tempt you into skipping it. As a wise man once said, the devil is in the details – understand the instructions and the system requirements before you click the “Run” button. A faulty or wrong driver can leave your system unstable or even unbootable.

Familiarizing yourself with your computer’s inner workings will not only help you understand its intricate mechanisms, but also help you keep your devices at their optimum performance.

It’s official: The Microsoft 2.0 era has begun

Chairman Bill Gates’ last day in the office as a full-time Microsoft employee has come and gone. (It was Friday June 27.)

At the Friday Town Hall meeting for Microsoft employees, Gates shared a few parting sentiments — and, along with CEO Steve Ballmer — shed a few tears. (Microsoft blogger Steven Bink has actual video footage of the Town Hall meeting and the tears — see the clips starting with “Ballmer — Changing the World.”)

There has been so much coverage of Gates and his legacy over the past week-plus on TV and radio, in newspapers, magazines and blogs, that it’s impossible to provide a full list. Here are a few of the many clips. (And yes, I am favoring clips where yours truly and the Microsoft 2.0 book got a shout out.)

The Economist: After Bill

ABC News: Geek Goliath: Gates Says G’Bye to Microsoft

National Public Radio: Gates Retires from Daily Role at Microsoft

Investor’s Business Daily: Curtain Call: Gates Exits Main Microsoft Stage

Gizmodo: Bill Gates Retirement Party (review of Microsoft 2.0)

Wired: The Many (Geeky) Faces of Bill Gates

Reuters: Life After Gates

Reuters: Ballmer becomes lone voice at Microsoft’s helm

Meanwhile, here are the internal e-mail messages that Gates and Ballmer sent to the Microsoft troops on June 27:

From: Bill Gates
Sent: Friday, June 27, 2008 10:40 AM
To: Microsoft – All Employees (QBDG)
Subject: My last full-time day at Microsoft
I want to share some thoughts on my last day as a full-time Microsoft employee.

For the last 33 years, I’ve had the ideal job. It’s been incredibly exciting to come here every day to work with the smartest people in the world to develop breakthrough software. Together, we have built a great company that has profoundly changed the world for the better.

After today, I will be shifting my full-time focus to the work of the Bill and Melinda Gates Foundation while keeping a strong connection to Microsoft.

The fact that I am making a career change does not mean that our work at Microsoft is done. In fact, the most exciting impact of our software is still ahead of us. Everything we have done up to now is just the foundation for the more dramatic breakthroughs to come. As you apply the magic of software to delivering a new generation of innovations, this company will continue to transform the way people communicate, create, and share experiences.

Microsoft is in an incredible position because we have momentum and a great pipeline of products and technologies. Even more important, we have great people at every level. In research and development we have great engineers focused on solving the most pressing challenges in computer science and turning new ideas into innovative products. In marketing, sales, and customer service, our world-class organizations keep getting better.

We also have strong leadership. As Microsoft has grown, one of the most exciting and fulfilling things for me has been to watch new leaders develop.

It starts at the very top. For the last 28 years, I have loved working side by side with Steve. Even now after all these years I am regularly impressed with his energy and insight. I think he and I have enjoyed one of the great business partnerships of all time. Steve has done a great job leading the company since he became CEO in 2000. Steve’s passion for democratizing the power of technology and inspiring customers, partners, and employees will keep us driving ahead.

I am thrilled to have Ray and Craig playing key roles in guiding the company’s strategy. For over a decade I had hoped that we could convince Ray to join Microsoft—and in the three years he has been here, he has made a huge difference in helping us focus on the challenge and opportunity of software plus services. I have worked with Craig for more than 15 years. His ability to anticipate the future direction of technology is a key asset, as is his deep interest in and understanding of emerging markets.

Of course, I’ll continue to be involved in the work of the company as part-time Chairman. As part of this I will help with a handful of projects that Steve, Ray, and Craig select.

As I reflect on the last three decades, the thing I’m proudest of is the role that this company has played in making the power of digital technology accessible and affordable. Software running on personal computers and other devices is the best tool for empowerment in human history. Microsoft founded the personal computer software business and we built the platforms that enabled the software industry to develop. Without your contributions, we would not have succeeded in making our dream of a computer on every desk and in every home a reality for more than 1 billion people worldwide—a dream we will extend to everyone in the future.

As I make the transition to focus more of my time and energy on the Gates Foundation, I am looking forward to applying the lessons I’ve learned—and, in some cases, the technologies that we have developed—to help address some of the critical issues that people around the world face in education, economic development, and health.

I want to thank all of you for your hard work and your dedication. It has been a privilege and an inspiration to come to Microsoft every day. I look forward to the amazing, world-changing innovations you will deliver in the years ahead as you continue the great work this company has always done.
Bill

From: Steve Ballmer
Sent: Friday, June 27, 2008 11:40 AM
To: Microsoft – All Employees (QBDG)
Subject: Bill’s Transition
I just wanted to add a few thoughts to Bill’s mail.

For the last 28 years, it has been my pleasure and privilege to work side-by-side every day with Bill to help build this great company. Of course it would be impossible to overstate Bill’s contribution as a technology visionary and business leader. Thanks to his vision and insight, Microsoft has delivered incredible innovations that enable people to achieve things every day that once seemed impossible.

For so many of us Bill has been a mentor, a colleague, and an inspiration. He has challenged us to do our very best work, and it has been an honor to try to live up to his expectations. For my part, I’ve had the incredible good fortune to have Bill as a great friend and a wonderful business partner. I’m grateful for the opportunity he gave me when he convinced me to drop out of business school to join the company. I can’t imagine a better way to have spent the last 28 years.

As much as I’ll miss Bill’s day to day presence here, I’m excited and deeply inspired by the step he is taking. The impact he will have on world health and education as he shifts his focus to the Bill & Melinda Gates Foundation will be amazing. Bill’s passion for empowering people and his commitment to making the world a better place have always been among his most important defining traits—this transition is a logical and important next step forward for him.

At the same time, we will continue to do amazing things here at Microsoft. Working together, we have created an incredible culture of innovation and accomplishment that will provide the foundation for future breakthroughs and even greater success.

During the last three decades, building on Bill’s insights about software and computing, we’ve transformed the way businesses operate and revolutionized the way people communicate, share information, and access entertainment. We helped create an industry that provides jobs for millions of people around the globe. We have delivered tools that have changed hundreds of millions of people’s lives for the better.

But for all we’ve achieved, I believe we’re just getting started. We’re in the midst of one of the most exciting periods in the history of this industry. Computing continues to become more powerful, more portable, and more affordable. Content, communications, and media are shifting entirely to digital formats. The combination of software plus services is transforming the way we create and deliver computing experiences. Online social networks are changing how people interact with each other. Gestures, voice, and other natural user interface capabilities are changing the way people interact with computers. New tools for developers are making it easier to take advantage of multicore processors and to deliver rich, connected experiences.

These trends are creating incredible new opportunities in our industry. And no company is better positioned to take advantage of these opportunities than Microsoft. No company has the tradition of innovation that we do, or the expertise across such a wide range of technologies. No company can match the breadth of our talent or the depth of our leadership.

Today, I believe that we’re poised to deliver a new generation of innovations that will have an even greater impact on people’s lives. That’s what this company has always really been about—finding new ways to use technology to make the world a better place. We’ve done that for a billion people during the last 30 years, and we’ll do it for billions more during the next 30 years.

This is not to say that we don’t face difficult challenges. But in the past, we have always done our best work when our job was to tackle the most pressing challenges. I’m absolutely confident this will be true as we move forward.

There’s no doubt that Bill’s last day as a fulltime Microsoft employee marks an important milestone in the company’s history. But the truth is that not much will change. Bill will continue to play a key role as Chairman of the company. And we’ll all continue to work together to deliver innovative products and services that improve people’s lives and create new opportunities for Microsoft, our customers, and our partners.

I want to close by extending a heartfelt thank you to Bill for his friendship, his partnership, his insight, and his inspiration.

Bill — It has been an amazing and wonderful 28 years for me, and I know that everyone at this company shares my respect and admiration for what you have achieved. We look forward to seeing what you do in the next phase of your career as we continue to build the great company that you launched more than three decades ago.

Steve

Can Microsoft learn to innovate?

Microsoft execs don’t miss any opportunity to claim that Microsoft is one of the biggest innovators in the tech world. Officials routinely cite Microsoft’s multi-billion-dollar annual research and development spending as proof that Microsoft is an Innovator (with a capital “I”).

As I note in Microsoft 2.0, R&D spending doesn’t necessarily translate into more or better innovations. Plus, many of the “innovations” to which the Softies point aren’t seen by the rest of the industry as anything to write home about. And Microsoft Chairman Bill Gates has thrown his weight behind more than one concept (voice/vision-centric input, SPOT watches, Tablet PCs, Surface multi-touch tabletops) that hasn’t panned out so well.

Privately, some Softies acknowledge that Microsoft needs to find new and innovative ways to innovate. In my book, I touched on some of the incubators, greenhouses and other new business ventures, products and initiatives Microsoft is testing as possible new innovation channels. A few of these had yet to go public by the time I submitted my book manuscript. But now there is more public info on some of them, specifically:

Microsoft’s Live Experimentation platform (ExP). The ExP team describes the platform as something that “enables product groups at Microsoft and later on will enable developers using Windows Live to innovate using controlled experiments with live users.”

Officelabs. A year ago, I was hearing talk about a new incubator in the Microsoft Business Division that was trying to become more agile and open. It took Microsoft until late April 2008 to publicly acknowledge the existence of “Office Labs.” One of the first Office Labs projects to see the official light of day is “Search Commands,” a tool that was codenamed “Scout” — an add-in for navigating Office 2007’s new Ribbon interface more easily.

There are still other as-yet-unannounced innovation projects at Microsoft. Stay tuned for more….

My opinion? Microsoft is already innovative. There are a multitude of changes under the hood of Vista: memory management changes, Aero, improved plug and play, 64-bit (yes, it’s the only complete 64-bit system on the market now). Past innovations? Plug ‘n Play with Win95, which got better and better with later releases, and their developer tools are arguably the best.

So where does MS miss the boat? Why is Vista not as appealing as OSX???

Microsoft has not Innovated where it counts the most! User Xperience! Apple dominates here. You only need to see a grandmother having a great time with her iPod, iPhoto and Canon Camera to understand that. Apples are simply easy-to-USE and easy-to-love!

Windows is still quite CONFUSING and INACCESSIBLE to a lot of people. You only need to open your control panels, try to change your wallpaper, or try to add a new user to understand where the great disconnect lies.

So Microsoft, Pssst! Here’s what you need to do. Stop working on geeky innovations that are so under the hood that I will never see them and make it easy for me to use and manage my system. Have you seen Apple’s Time Machine? Make my windows backup experience that easy.

AERO is cool, but did you really add a 3D interface simply to mimic XP/2000 with a dash of transparency???? That is soooo dull compared to AQUA on the Mac or XGL on Linux. Come on!!! Opening and closing my windows on Vista should IMPRESS me and make me say ‘COOOL!’

A lot of people rightly ask: where is the WOW?

a. Self-cleaning toilet – Innovation
b. Space rocket – Innovation

MS needs more rockets and fewer self-cleaning toilets.

Yahoo: All that hedging for nothing

As I mentioned in the conclusion of Microsoft 2.0, I had just submitted the final version of my book manuscript a week before Microsoft announced its $44 billion bid to buy Yahoo.

Disbelief was followed by utter despair — and not just on Yahoo CEO Jerry Yang’s part. All I could think on February 1 was that I was going to have to go back and revise every single one of my 300-plus pages.

I did go back in and update my chapters to reflect the possibility Microsoft might end up buying Yahoo. Then I revised again to say Microsoft did buy Yahoo (given that much of the press in February made it sound like it was pretty much a done deal). Right before my drop-dead go-to-printer date, I revised one last time, saying that Microsoft might or might not buy Yahoo.

Well, as we now know, on May 3, Microsoft withdrew its takeover bid, after being unwilling to meet the higher per-share price that the Yahoo board was demanding.

As I noted in the book, if Microsoft had bought Yahoo, it would have taken the companies years to integrate. While Microsoft officials were predicting an almost immediate impact on their shared online services/online advertising strategies, few outside observers believed that the buy would result in any immediate changes — in Microsoft’s Online Services Business or any other parts of the Redmond software maker.

I think Silicon Alley Insider’s Henry Blodget had the best analysis of why a Microsoft-Yahoo combination would take forever (if ever) to begin to gel. Based on comments from many Softies, Yahoos, and industry/market watchers, Microsoft’s ultimate failure to buy Yahoo may have been the best thing that could have happened to Microsoft, for a variety of reasons. The dissolution of the deal does raise the question, of course, of what Microsoft now plans to do to build its online ad inventory (and search market share) — the primary reason Ballmer & Co. said they wanted the deal in the first place.

Will Microsoft swoop back in later this year and try to buy Yahoo again? Will the Redmondians buy another online-advertising player instead? Will Microsoft do the seemingly unthinkable and completely withdraw from the online advertising business? Stay tuned….

Think Week Paper: Edge Computing Network

Twice a year, Microsoft Chairman Bill Gates was known for going off on sequestered “Think Weeks” to read papers submitted by Microsoft employees proposing new products and technologies they believed Microsoft should be considering going forward.

In the early Microsoft days, these papers were secret. But in the middle of this decade, Microsoft began sharing Think Week papers widely inside the company, allowing employees to comment on them and to see Gates’ and other key Microsoft executives’ comments on them.

One of these papers, shared with me by a source who requested anonymity, provides a good sense of some of the “cloud-computing” infrastructural issues with which Microsoft has been — and needs to be — grappling.

Because Microsoft is spending so much on building out its datacenters and staffing up its online business to gird for the Web 2.0 and 3.0 battles, the issues described in this paper are especially interesting. The hints about the still-under-wraps Microsoft CloudDB and Blue technologies are also rather intriguing.

An Edge Computing Network for MSN and Windows Live Services
Author: Jason Zions
Date:  12/15/2006

Abstract

Microsoft’s online properties create and monetize rich content and innovative end-user experiences. To meet their business objectives while providing the quality of service needed to attract and delight an audience, they must overcome a variety of technical and operational challenges. Many of these challenges arise from the current architecture of the properties themselves and from limitations of the infrastructure which supports them.

The Edge Computing Network (ECN) extends Microsoft’s existing core network and data center infrastructure with intelligent computing nodes at the “edge” of the network cloud. This distributed computing network provides a set of network, computing, storage and management resources and services closer to end users.  The ECN goes beyond traditional Content Distribution Networks to enable a wider range of application architectures that offer improvements to performance and robustness and reduction or elimination of some operational challenges.

This document identifies the challenges faced by Microsoft’s on-line properties (as well as some problems created by current implementations), lays out the vision for the Edge Computing Network, describes the progress already made towards achieving the vision, and works through two scenarios showing how the ECN could be exploited by a property.

Challenges Faced by On-Line Properties

Microsoft has roughly 150 on-line properties covering a tremendous range of scale, customer base, and functionality. Despite that huge variation, they each face a fairly consistent set of challenges that fall into a few broad categories.

Network Challenges

Any internet-facing service has to deal with network latency, which degrades the usability of the service. The larger the latency, the worse the problem; the usability impact is multiplied by the number of round trips. Common web site development can result in tens or even hundreds of round trips to display a fairly complex page; each separate graphical element gets retrieved independently. Various technologies have been created to deal with this. Content Distribution Networks (CDNs) like Akamai and Savvis sell distributed small-object caching services, relying on a global network of points-of-presence (POPs) housing web servers which serve static graphical elements. These POPs are located so as to have much lower latency (with respect to the end user’s browser) than that of the owner of the actual on-line service.
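To make that multiplication concrete, here is a back-of-envelope illustration (the round-trip times and request counts are assumptions for the example, not measurements from this paper):

    # Perceived load time grows linearly with the number of round trips.
    round_trips = 40          # complex page: HTML plus many separate elements

    origin_rtt_ms = 120       # user far from a single centralized origin
    edge_rtt_ms = 20          # user near a CDN/ECN point of presence

    print(f"origin: {round_trips * origin_rtt_ms / 1000:.1f} s")  # 4.8 s
    print(f"edge:   {round_trips * edge_rtt_ms / 1000:.1f} s")    # 0.8 s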

Similar problems arise for streaming video, although the problem is due more to packet loss and variation in packet delivery times than in pure latency. Streaming as well as straight downloading of large files raise issues of bandwidth provisioning; serving many downloads and streams from a single origin would require very large (and very expensive) egress from that origin to the Internet, and the Internet’s backbone itself isn’t growing in capacity as fast as Internet traffic overall is growing. CDNs sell services which solve those issues as well, serving large files and streams from many distributed POPs around the world.

The past few years have seen the rise of Distributed Denial of Service (DDoS) attacks, in which a large army of zombie attacking nodes attempt to overwhelm a single service. While usage of CDNs can protect static content from DDoS attacks, a single origin hosting dynamic content is still vulnerable.

Operational Challenges

Properties have to acquire and provision server hardware to host their services, and they must size their resources to match their expected peak load. Any time the load on a service is below the servable peak, money is being wasted, both in the form of underutilized capital equipment and in the form of power to run and cool unneeded servers. Worse yet, properties must continue to accurately predict their peak usage as they grow; insufficient capacity for peak load results in slow or interrupted service, reducing customer satisfaction and leading to defection of users to competing services.

Since properties acquire their own servers, they typically attempt to optimize their choice of equipment for their specific application. This means that, across all of Microsoft’s on-line properties, there is only limited commonality of hardware. Excess servers cannot be easily repurposed to other properties. Also, the cost of maintaining large numbers of servers scales linearly with the number of SKUs; we can’t fully leverage the sublinear scale of cost versus the total number of servers. Finally, OS deployment onto such a large variety of servers is quite complex, introducing still more cost as well as greater risk of misconfiguring some systems or missing a patch. Rigorous standardization of servers is of only limited help; specific hardware models eventually leave production and lose support from vendors, and new models must therefore be introduced.

Each property is responsible for its own monitoring, reporting, and logging systems. Most early-stage or small properties can’t afford to invest significant time or money in automation; these tasks are handled in a more expensive ad hoc manner. There are some projects under way to build common tools in this space (e.g. MAGIK), but these tools are designed for the needs of the largest properties (particularly Live Search and Hotmail) and are too inflexible to meet the varied needs perceived by the great majority of properties.

Many existing properties have centralized architectures that cannot be easily geo-distributed. As a result, there are limits to their growth based on the pace at which Microsoft can build sufficiently large datacenters to hold their infrastructure. These limited architectures made sense early in the life of the property when servers were hosted in only one datacenter; the simplifying assumptions made possible by that design are pervasive throughout the code, though, making the overall implementation inflexible. Some assumptions render the implementation highly fragile, e.g. those related to multicast architectures or round-trip time to access various elements of the property. For some properties, capacity growth can be achieved only by adding new racks of servers physically adjacent to already deployed servers. These kinds of constraints make consolidated management of datacenter space extremely difficult; smaller properties are often “bumped” from datacenter to datacenter, sometimes repeatedly, to allow larger, monolithic/fragile properties to expand.

CDN as Partial Solution

CDNs can be used to solve some, but not all, of the network challenges described above. Some properties use one or more services from various CDN providers to address the specific challenges they face. Very few properties leverage the full suite of available CDN services, and many properties use none at all. Various factors play into these choices.

CDN services are not inexpensive; Microsoft spent about $40 million on CDN services in FY06. Projections of future growth (based on expected growth in the number of properties, amount of traffic, and usage of CDN services) show this growing to more than $130 million in FY11.

CDNs can only be used to handle static content. While there is some limited capability for hosting application code remotely via a CDN (e.g. Hotmail’s “AATE” component on Savvis), there are significant drawbacks: cost is significant, capacity is limited, management and deployment tools are primitive, and Personally Identifiable Information (PII) cannot be used there.

All CDNs provide a “traffic manager” or “global load balancer” (GLB) service which directs user requests to the location most appropriate for serving the request. The GLB services provided by the various CDNs are limited in sophistication; because of their general-purpose nature, they cannot take into account application-specific quality-of-service needs, and they cannot route traffic based on Microsoft-specific business logic (e.g. from which locations can Microsoft serve this traffic without paying for bandwidth, based on our current peering relationships, time of day, link utilization, etc.).
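As an illustration of the kind of business-rule-aware routing the paper says generic GLBs cannot provide, here is a hypothetical sketch; the node attributes, thresholds, and selection rules are invented for the example:

    def pick_node(nodes, user_region):
        # Prefer healthy in-region nodes, then nodes whose egress is free
        # under current peering relationships, then lowest round-trip time.
        in_region = [n for n in nodes
                     if n["region"] == user_region and n["utilization"] < 0.8]
        free_peering = [n for n in in_region if not n["paid_bandwidth"]]
        pool = free_peering or in_region or nodes
        return min(pool, key=lambda n: n["rtt_ms"])

    nodes = [
        {"region": "eu", "utilization": 0.5, "paid_bandwidth": False, "rtt_ms": 18},
        {"region": "eu", "utilization": 0.9, "paid_bandwidth": False, "rtt_ms": 12},
        {"region": "us", "utilization": 0.3, "paid_bandwidth": True,  "rtt_ms": 95},
    ]
    print(pick_node(nodes, "eu"))  # the healthy, free-peering in-region node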

The logs created by CDNs are difficult to access for specific purposes. Raw billing data is expensive to retrieve and process. While Microsoft can meet its regulatory and forensic needs in cooperation with CDN vendors, the process is sometimes quite complex.

Edge Computing Network Vision

Partial solutions can only get us so far; Microsoft’s properties need an integrated and complete solution to their technical and business challenges. The Edge Computing Network is one such complete solution.
The ECN vision is based on the following four assertions:

1. Quality of Service (QoS) and Scalability are critical to the success of Microsoft’s online properties.

2. When delivering global online services/applications, centralized architectures are inadequate because they lead to poor QoS and do not scale well.

3. For the same purpose, distributed architectures can achieve superior QoS and scalability.

4. Centralized technologies and the people who are familiar with them are plentiful whereas distributed technologies and the people who know them are few.

The Edge Computing Network will contribute to the success of Microsoft’s various online properties by enabling properties to make use of a comprehensive set of optimized and easy-to-use distributed computing services and resources. Properties can focus on delivering compelling end-user experiences and addressing their business priorities without having to make the daunting choice between spending unsustainable amounts of money to compensate for architectural limitations of the centralized approach or developing and operating a proprietary distributed computing network.

The Edge Computing Network comprises roughly 24 nodes distributed worldwide. Most Internet users will be no more than 20 msec roundtrip time away from at least one node. Each node provides traditional CDN services and also provides distributed computation and storage services capable of hosting elements of Microsoft’s own on-line properties. Each node would have egress capacity to serve Internet end-users in the region. The nodes would be connected to each other and to existing Microsoft datacenters by a network overlaid on the Internet and on private links leased or owned by Microsoft.

CDN-like Services

Based on the service needs of properties desiring access to the customers within a region, the ECN node serving that region would be provisioned so as to provide an appropriate subset (and capacity) of these services:

• Small object caching
• Large file downloads
• Media streaming
• Peer-to-peer file transfer
• Smart (business-rule and load aware) traffic management and load balancing
• Traffic and user analytic data
• Logging to support billing, regulatory compliance and forensic demands
• Monitoring and management of services

By providing these services in-house, Microsoft can extend and enhance these services beyond what is possible through external CDNs. Properties can more effectively use these services when developers have visibility into the details of their actual functioning. For example, we can do a much better job of distributing downloadable content around the world in response to sudden changes in demand patterns. Traffic management can take into account information which Microsoft is unwilling to divulge to third parties. Our control of the software we ship can permit us to build P2P distribution mechanisms so they improve performance for end users while reducing costs to Internet Service Providers and to Microsoft.

More importantly, Microsoft can control the exact “footprint” of our network, siting nodes in the locations which are most important to our business and using our bargaining power and relationships to do so in the most cost-effective manner.

Many of these goals could be achieved in partnership with a third-party CDN; however, that CDN would be free to sell those same enhanced services to others, including our competitors. Our intellectual property would be used to the benefit of the very companies over which we seek to build competitive advantage.

Microsoft already has considerable intellectual property in this space; we hold a substantial portfolio of patents across these technologies. While we could implement all of these services from scratch, the most cost-effective way to get these capabilities deployed and working to our advantage is to acquire existing, functioning technology from one or more CDN providers.

Distributed Computation and Storage

Based on the service needs of properties desiring access to the customers within a region, the ECN node serving that region would be provisioned so as to provide an appropriate capacity of these services:

• Application containers
• BLOB storage (replicated and local)
• Transaction-oriented (database) storage (replicated and local)
• Distributed file system
• Logging to support billing and diagnostics
• Monitoring and management of services

An application container is an isolated environment for running application code. Containers come in a single “size”; that is, each container exposes to the code it hosts a specific amount of physical and virtual memory, a single processor of defined speed, and a maximum amount of internet egress. Property developers construct elements of their applications to fit those containers. Scale of service for a property is achieved through running code in the desired number of containers, rather than through increasing the capacity of a single container; that is, capacity is provided in a scale-out, rather than scale-up, fashion.
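To illustrate the scale-out arithmetic, here is a tiny sketch; the per-container capacity figure is an invented assumption, since the paper deliberately leaves container sizing abstract:

    # Containers come in one fixed size; capacity comes from running more of them.
    CONTAINER_RPS = 400                # assumed requests/sec one container serves

    def containers_needed(peak_rps):
        return -(-peak_rps // CONTAINER_RPS)   # ceiling division

    print(containers_needed(9000))     # 23 containers for an assumed 9,000 req/s peak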

Each container is isolated from other containers running on a given physical system and from the host OS which runs directly on the physical system. Applications are unaware of the precise mechanism used to provide this abstraction or virtualization. The provisioning infrastructure is capable of starting or stopping instances of a particular computational element based on a variety of criteria:

• Scheduled by time-of-day or time-of-year
• On demand of operations staff
• Automatically in response to measured load on already-running instances
• Automatically in response to load on other nodes

Scheduled instance provisioning is the easiest to implement but still gives property owners solid tools to control quality of service versus the cost to provide that service. Financial properties (e.g. MS Money) would schedule increased capacity during end-of-month, end-of-quarter, and end-of-year periods as well as tax-preparation periods; this could easily be adjusted to match regional fiscal reporting requirements (e.g. April 1-15 in the US but April 16-30 in Canada). Operations staff could easily override these schedules to meet unusual demand.

More interestingly, the provisioning infrastructure could monitor some property-defined load measure and dynamically adjust the number of instances of distributed elements of that property to keep the load within a specified range. Code would compute that load factor so that it accurately reflected the quality of service provided to the end user. Depending on the nature of the property, live code could be run on the end-user’s system which measured actual elapsed time for various operations rather than simply relying on brute-force measures of round-trip times; these real-time values would be combined with server queue lengths, memory pressure measures, etc. to help determine whether additional instances would improve subpar response times, or to decide that the property is over-provisioned and can release one or more instances.
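A minimal sketch of that reconciliation step, with the load metric, thresholds, and instance counts all assumed for illustration:

    def desired_instances(load_factor, current, min_n=1, max_n=50):
        # load_factor is the property-defined quality-of-service measure
        # described above, normalized so that ~0.5 means comfortably loaded.
        if load_factor > 0.75 and current < max_n:
            return current + 1          # QoS slipping: scale out
        if load_factor < 0.25 and current > min_n:
            return current - 1          # over-provisioned: release a container
        return current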

Automating the balancing of service between ECN nodes is a key element in providing protection against DDoS attacks. By integrating this automated balancing with the routing capability provided by the Traffic Manager, the ECN will be capable of providing service at “nearby” nodes in the event a single node is under attack. The provision of additional servers and addresses for a service will dilute the impact of the attack; the fact that the additional addresses are more distant (network-wise) from the attackers will decrease the rate of the attack as well.

Storage in the ECN node comes in a variety of types reflecting the varying needs of applications. All storage usage would be limited by per-property quotas and tracked for billing. Application code (exe and dll files, etc.) needs to reside someplace visible to the various application containers within the node; a distributed, replicated file store would address code distribution needs, and Microsoft has several such file systems either in release, in development, or in research.

Applications need to store chunks of data for a variety of uses; some are purely local to an instance, others are intended to be shared amongst all instances of the application running anywhere in the Edge Computing Network. An application would tell the blob storage infrastructure about the replication needs for each category of blob data: local to this instance, local to the node or a set of nodes, and/or replicated to a backing store in one of Microsoft’s large-scale datacenters. The blob store would be built on top of the Cheap File Store (used by Windows Live Spaces), Blue (used by Windows Live Mail), or some variation on those which accommodates the extended replication needs of ECN.
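A hypothetical sketch of how a property might declare those per-category replication needs; the API shape and category names are invented for illustration:

    BLOB_REPLICATION_POLICY = {
        # purely local to one running instance
        "session_scratch": {"scope": "instance"},
        # shared across instances in this node or a set of nearby nodes
        "regional_cache":  {"scope": "node_set", "copies": 3},
        # also replicated to a backing store in a large-scale datacenter
        "user_content":    {"scope": "node_set", "copies": 2,
                            "backing_store": "primary_datacenter"},
    }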

Some applications need transactional storage; again, replication needs would vary by application. For properties needing distributed database semantics, a service like CloudDB would be provided. Properties needing a database purely local to a node would use SQL Server on top of local non-replicated storage; a property which needed off-node replication of data for business-continuity and backup could build such a mechanism using existing and forthcoming services provided by the major datacenters (e.g. Blue).

Overlap of CDN and Distributed Computation Infrastructure

While the features and implementation of the CDN and Distributed Computation infrastructures have been described separately, there’s no reason they need to remain separate in practice. Over time, the various systems which provide CDN functionality would become applications running within Application Containers in the ECN node. Storage previously dedicated to small-object caching, large file download, and streaming would be integrated into the blob storage infrastructure of the distributed computation environment. Logging and monitoring systems would converge.

No CDN provider today does things this way; none has a general-purpose distributed computation environment. The ability to dynamically reallocate resources to match needs across the spectrum of CDN and computation services is a unique advantage of the Edge Computing Network we envision.

Status of the ECN Program

An attempt was made to acquire a CDN vendor; changes in the economics and valuation of the various players in that market space drove the price of the acquisition outside the acceptable range. As a result, negotiations have taken place to license technology from one or more CDN vendors; this technology would serve to jumpstart the ECN implementation. The agreement will include consulting services and operational assistance for a period of time, enabling Microsoft to acquire a set of best practices from a successful CDN.

A list of 24 broad locations for ECN nodes has been roughed out; selection of the first three and second three specific sites is under way. Site selection is strongly influenced by the experience of the Windows Live Infrastructure team responsible for siting and managing Microsoft’s major data centers. The current plan has the first three sites built out by the end of FY07, the second three sites built by mid-FY08, and the remaining sites from the full list of 24 coming on line by the end of FY09.

Selection of technologies for the various components of the distributed computing environment is under way. Of particular interest is the technology used to support the application container itself. An appropriate balance needs to be struck between isolation and performance; the selected technology needs to provide enough management tools and hooks that beta-quality services can be provided in three sites at the end of FY07. Microsoft has a variety of virtualization and isolation technologies entering the product stream over the next two years (e.g. Silos); ideally, deployed properties would be abstracted from the app container technology so as to allow the ECN implementation to change over that time without requiring major rework.

Scenarios and Examples

A wide variety of scenarios were used to drive the development of requirements for the Edge Computing Network. A small set of scenarios is presented here to give a flavor of the ways properties can leverage the ECN to change the way they do things.

Cricket Match Play-by-Play

The population of nations which play international-level cricket exceeds one billion. During the course of a five-day test match involving his national team, a cricket fan is likely to be viewing a small web service which displays end-by-end descriptions of play and the current score; users typically check the window every 4-8 minutes for at least six hours.

This type of property is ideal for deployment within ECN. Minimal state information needs to be saved, and none of it is per-user. The dynamic content (play-by-play data and current scores) is generated near the edge and can easily be injected there. Service instances can be started to meet current demand and can be shut down at the end of each day of play and at the end of a match. No servers need to be provisioned in any central datacenter; computation resources are only consumed (and thus only paid for) during a match. Archival information (play-by-play of the previous day’s play and previous matches) can be stored using replication services; access to archival information wouldn’t require caching the data at an edge node, but the application could detect broad interest in particular archival data and elect to cache that data.

More interestingly, a peer-to-peer network could be built dynamically which would reduce the amount of network egress bandwidth required from the ECN node while increasing the speed at which updates could be pushed to the entire network of users. By taking advantage of knowledge about ISP network structure (i.e. which IP address ranges belong to which ISPs) and structuring the P2P network accordingly, ISPs would also see reduced internet downhaul costs; information would flow from the ECN-deployed application into a small subset of an ISP’s total user base, and those users would pass the information amongst themselves within the P2P network.
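A sketch of the ISP-aware grouping step, using Python’s standard ipaddress module; the prefix table is a stand-in for real ISP routing data (the blocks shown are reserved documentation ranges):

    import ipaddress

    ISP_PREFIXES = {
        "isp_a": ipaddress.ip_network("203.0.113.0/24"),
        "isp_b": ipaddress.ip_network("198.51.100.0/24"),
    }

    def isp_of(peer_addr):
        # Identify a peer's ISP so the P2P overlay can keep update traffic
        # flowing inside each ISP's own network.
        ip = ipaddress.ip_address(peer_addr)
        for name, net in ISP_PREFIXES.items():
            if ip in net:
                return name
        return "unknown"

    print(isp_of("203.0.113.42"))  # isp_a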

Knowledge about a user’s location and interest in cricket and in their nation would be used to drive the delivery of appropriate ad content. Since update frequencies are predictable, the AdCenter property could spend longer than the normal 1-2 ms time budget allocated for selecting the ad which generates the highest income to Microsoft. Ad content would be dynamically cached at the ECN node; some ads (e.g. those tied very specifically to a particular series of matches) would automatically expire and be flushed from the cache, while others would remain to be reused by AdCenter through other properties.

Hotmail

Today’s Hotmail service is monolithic; it isn’t easily geo-distributed, it has huge storage requirements, and it relies upon other services (e.g. address book, passport) with which it expects to be collocated. Given the capabilities of the ECN, some parts of this can change.

The Passport service itself could be deployed into the ECN. Each user’s passport profile would be stored in the ECN node closest to that user’s normal location, with backing store in a primary datacenter. User sign-on would be much faster, since long round-trips to a single datacenter in the US would be eliminated. The secure nature of an ECN node allows us to store passwords and PII right on the edge nodes.

Hotmail itself could cache mailbox header information in the ECN node closest to the user. By caching just headers, the page-load time for the initial mailbox view would be dramatically reduced, addressing one of Hotmail’s biggest competitive disadvantages against Yahoo Mail and Gmail. The cache could be pruned by code written by the Hotmail team and deployed to the edge independently from the Hotmail service code.
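A minimal sketch of such an edge-side header cache; the capacity, eviction policy, and interface are assumptions for illustration, not Hotmail’s actual design:

    from collections import OrderedDict

    class HeaderCache:
        # LRU cache of mailbox headers kept at the ECN node nearest the user.
        def __init__(self, max_users=10000):
            self._lru = OrderedDict()
            self._max_users = max_users

        def get(self, user_id):
            headers = self._lru.pop(user_id, None)
            if headers is not None:
                self._lru[user_id] = headers   # refresh recency
            return headers                     # None -> fetch from datacenter

        def put(self, user_id, headers):
            self._lru[user_id] = headers
            if len(self._lru) > self._max_users:
                self._lru.popitem(last=False)  # evict least-recently-used user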

Intelligent pre-fetching of individual email messages could be performed; the Hotmail application code can predict which message is likely to be the next one the user wants to view. With the cooperation of an AJAX-style client, the ECN-resident component of Hotmail could use various forms of HTML and TCP compression to more efficiently send content to the client browser.

Acknowledgements

Several people have contributed to the vision and ideas described in this paper: Jeff Cohen, Vidyanand Rajpathak, Hang Tan, Ben Black, and Robyn Jerard. Thank you for your knowledge, review, and spirited argument.

Windows Live Wave 3 Planning Memo (August 2007) — Part 1

When Microsoft began rolling out its primarily consumer-focused Windows Live service line-up, there seemed to be little rhyme or reason to the company’s plans. Enter Chris Jones, Corporate Vice President of Windows Live Experience. In 2007, Jones, along with colleagues David Treadwell, Corporate Vice President of Live Platform Services, and Brian Arbogast, Corporate Vice President of Mobile Services, began trying to bring some discipline and regimentation to the Windows Live development effort.

In the summer of 2007, that gang of three issued a Windows Live Wave 3 planning document that demonstrated just how much they planned to change the modus operandi of the Windows Live team. The thinking: theme planning, milestones, vision checkpoints, and other Windows-like conventions, if successfully implemented, will make Windows Live services more predictable and reliable. (The addition of these more rigorous quality gates also risks slowing the Windows Live development pace, however.) Meanwhile, it will be interesting to see how these milestones and policies are affected if Microsoft ends up acquiring Yahoo — or if its acquisition bid turns out to have been more distraction than reality.

Here is Part 1 of Microsoft’s internal Windows Live Wave 3 Planning Memo.

TO: Windows Live Experience Team; Live Platform Services Team
FROM: Chris Jones, David Treadwell, Brian Arbogast
RE: Planning Windows Live Wave 3

INTRODUCTION

As we are nearing the completion of Windows Live Wave 2, we want to congratulate the teams on their work to date.  Windows Live Wave 2 delivers the first version of our integrated suite of software services.

Our mission is to deliver the essential suite of software and services for individuals around the world, designed to help them stay connected (browse, create, manage, and share with the people they choose, on any device) and protected (provide safety and security for their information, their families, and their devices), built on the leading platform for developers, merchants, and advertisers. We believe that users of these services will create a web of user defined content that will improve the traffic to Windows Live Search, create a valuable audience for advertisers, and enable the next generation of software and content publishers.  We believe our investments in safety, security and PC health will differentiate our experience and dramatically increase customer satisfaction and loyalty.  At the same time, we believe this combination of software and services will be an incredible benefit and differentiator for the Windows PC and Windows powered devices.  Finally, we believe that it is the work of the community of developers and content publishers that will enhance and uniquely differentiate the experience for customers.

This document is the planning memo and it outlines the shared assumptions in our planning work across Windows Live Experience and related teams in Live Platform Services and Mobile Services:

* It covers the state of the market – including customers, partners, competition, and challenges.
* It outlines our strategy for Windows Live. It sets the shared themes and bets that span all the features in Windows Live Wave 3 and investments for Wave 4.
* It outlines the feature teams and areas for investment in Windows Live, as well as the questions to be asked and answered as part of team planning.
* It sets the timeline and schedule for both our plan and Wave 2.

Together we will build a plan that spans the work in Windows Live Experience and Live Platform Services and supports the business priorities established by the Online Services Group.  This plan will cover the work and commitments for the teams over the next year.  A wave describes the deliverables that happen throughout the year, including smaller improvements that are made to our services on an ongoing basis.  As we plan, it is important to recognize that the platform group needs more notice and time to build out both back end infrastructure and operations.  So when we plan Wave 3 we need to also look out far enough to plan the platform work for Wave 4.  We are making a few changes from Wave 2 based on our experience as a team.

* We are going to focus our planning work on themes that span feature teams.  We hope in doing so to improve the experience for customers when using our suite together.
* We are going to enter planning as a team with a clear definition of planning themes, including click-through prototypes.  As we do this we will front-load design of major cross-Wave dependencies.
* We will have clear entry and exit criteria for MQ, and use MQ to invest in design and implementation of major cross team dependencies.

The following figure illustrates our planning process.

[Figure: Windows Live Wave 3 planning process]

This planning memo starts our planning process and is intended to be used by teams to develop the vision and the feature set for their areas.  It describes bets and themes for Wave 3.  These themes in many cases span experience and platform, client and service, and are used during planning to structure our investigation of what to build.  In some cases, this means we will decide not to pursue a theme.  In other cases, we will merge themes together.  In others, we’ll narrow our definition.  And in some cases, a new set of scenarios will emerge.  As we investigate the themes, and scope what is possible, we will write the vision document, which defines the pillars for our release.  As with Wave 2, the vision document describes our team commitment to deliver.

Some people have asked “what’s the difference between a theme and a pillar?”  Themes are used in planning, and outline the areas of exploration for a Wave.  By design, we have more themes than we can fit in a Wave.  In the course of planning, we will refine the themes into work that can be achieved in our release timeframe.  This exploration, based on detailed plans from the feature teams, results in the pillars for the release.  It is possible for a theme to become a pillar.  It is also possible that in planning we merge themes or come up with a new set of scenarios that become a pillar.

In Wave 2, themes were provided as guidance to feature teams.  In Wave 3, themes will be the foundation of our planning process.  Each theme will have an owner (generally a GPM and/or PUM), a product planner, and, in most cases, a design lead.  Each theme owner will produce a presentation and a high-fidelity click-through prototype for their theme.  The role of the owner will be to coordinate the investigation of the theme, working with product planning, product management, design, development, testing, and other discipline leaders.  They will work across dependent teams as they are writing their drafts and make sure that scenarios or features that span teams are covered end to end.  They will outline the proposed scenarios and customer promise.  We expect to hold theme checkpoint meetings with the theme owners in late October.

These theme checkpoints provide scoped and refined themes to the feature teams, who will then work on planning their work for Wave 3.  Once we have scoped the themes, we will have a set of feature team checkpoints, where the teams will describe what they believe can be delivered for Wave 3 based on our themes.  These feature teams are the experts on the scenarios and specifics for their area and are responsible for building best of breed solutions to meet customer demand.  Any conflicts or disagreements between teams should be resolved as part of the checkpoint meetings. We will hold checkpoint meetings with each feature team in December, where the feature teams will outline their plans for Wave 3.

Following these checkpoint meetings, we will decide on the pillars for Wave 3 and write the vision document.  The GPMs and/or PUMs will work with Chris Jones and David Treadwell to create a single draft vision document that spans the work in Windows Live Wave 3.  This will include the value proposition, tenets, top-level schedule, shared bets, and feature commitments across our teams.  We will load-balance as required across teams to make sure that the themes and scenarios are delivered for the Wave.  We expect to publish the vision document in December.

The feature teams will use the vision document and resource plan to build the final feature list and schedule for their area.  Following their detailed schedules, we will have a vision week with team members and partners where we walk through the vision, demonstrate the prototypes, and commit to the shared schedule.  For Windows Live Experience teams, we will then move into M1 and coding for Windows Live Wave 3….

BIG BETS

There are a few bets we will make as a team that we know now will require work from all of us to achieve the vision.  These bets represent initiatives across the team.  In some cases, they will be realized by work that spans feature teams, and in others they will be largely owned by a single feature team.  Each feature team must understand the work required to support each bet as part of their Wave 3 planning.  As with Wave 2, we have picked a targeted number of bets for Wave 3.

Windows and Internet Explorer

While Windows Live is a service that is available across devices, we know most customers connect to their services on a Windows PC using Internet Explorer.  We have a unique opportunity to provide a seamless experience for customers who choose to use our services with Windows and Internet Explorer.

While we will target a seamless experience on Windows Vista, we will make a bet on the Windows 7 platform and experience, and create the best experience when connected with Windows 7.  We will work with the Windows 7 team and be a first and best developer of solutions on the Windows 7 platform.

Our experiences will be designed so when they are connected to Windows 7 they seamlessly extend the Windows experience, and we will work to follow the Windows 7 style guidelines for applications.  We will work with the Internet Explorer 8 team to make sure we deliver an experience that seamlessly extends the browser with our toolbar and other offerings.

Search and MSN

Combined, our network of services completes the experience for our customers, advertisers, and partners.  We will bet on MSN as the unified portal and customized home page for Microsoft’s services.  We will bet on Live Search to connect customers to information from within our experiences.

We will optimize our experience for customers who use MSN and Live Search, and create unique experiences that work together across Microsoft’s network of services.

Beta and Service Deployment

We will invest across our suite in improvements to beta and service deployment, with a particular focus on web-based services.  We will make it possible to “self-host” and “dogfood” all services on top of “live” data, so it is easy to test and use the products before they are deployed to production.

We will invest to increase the stability and maintenance of our INT environment.  We will make it possible for customers to “opt in” to beta versions of our services so we can introduce betas and get customer feedback without updating the entire customer base.
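To make the “opt in” idea concrete, here is a minimal sketch of the mechanism: gate the beta experience on an explicit per-user flag, so that pre-release services reach only volunteers rather than the entire customer base. This is purely illustrative; every name in it is hypothetical and none of it comes from the memo or any Microsoft system.

```python
# Hypothetical sketch of an opt-in beta gate. The User type and the
# "beta"/"stable" labels are invented for illustration only.
from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    beta_opt_in: bool = False  # set when the user explicitly opts in

def pick_release(user: User) -> str:
    # Opted-in users are routed to the beta service; everyone else
    # stays on stable, so betas never touch the whole customer base.
    return "beta" if user.beta_opt_in else "stable"

print(pick_release(User("alice", beta_opt_in=True)))  # -> beta
print(pick_release(User("bob")))                      # -> stable
```

The design point is simply that the gate lives with the user’s account, so feedback comes only from people who asked for the beta.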

PLANNING THEMES

Everyone should think about the planning themes as the rough draft for the vision document.  They form the areas for exploration and the set of questions we will ask and answer as a team as part of our planning process.  As described above, we have more themes by design than we can fit in a Wave; in the course of planning we will refine them into work that can be achieved in our release timeframe, scope what is possible, and write the vision document that defines the pillars for our release.  In Wave 2, themes were provided as guidance to feature teams.  In Wave 3, themes are the foundation of our planning process.

Refining the Web Experience

As part of our vision to help customers get to their information from anywhere, it is essential for us to have an integrated, browser-based way to deliver our services. With Wave 2, we started our integration with a shared home page and header for navigation.  This in turn uncovered a new set of seams in the experience.  We’ll add more services in Wave 3 – for example, device management and RSS feeds – that will need to fit into our experience.  We need to rethink the overall navigation model between the current header and the secondary navigation in spaces.

What is the difference between the “what’s new” view on spaces and home.live.com?  Should private messaging and the inbox come together as a single concept?  Are events part of the calendar or separate?  How do we bring together the concepts of home/start and dashboard, or should we?  Do we have a navigation model pivoted on people (my stuff, friends’ stuff) or data (photos, files, etc.)?  Or some hybrid?

Our customers will use both MSN and Live Search, in addition to Windows Live, and we want to support navigation across the network.  Today our header is optimized for “my view of my stuff” – showing customers their services – and is scoped to a few elements.  We have an opportunity to connect our networks together and optimize the depth scenario.  What should my view of my stuff be?  How does the experience change when viewing someone else’s space?  How do customers move from MSN to Live Search to Windows Live?  As we move to MSN as the “home page,” what is the evolution of home.live.com, and how does cross-navigation work?  How does Office Live extend the experience?

Some customers will run on a Windows PC, have our client software, and use it together with our browser-based services.  In Wave 2 we delivered limited integration between our client and browser experiences.  How should we change our web-based experience when we know customers are using our client software?  How does our experience change if the toolbar and/or Messenger are installed?  What if the customer is on their primary PC?  How do we use our web-based suite to encourage use of our client software?

Many customers will use Internet Explorer to connect to our services.  We have a unique opportunity to extend the browsing experience for customers using Internet Explorer with our toolbar and additional services that enhance the browsing experience.  What scenarios and features will we enable that are unique for customers using Internet Explorer?  How should our web-based experience change if the toolbar is installed?  How do our services make the browser experience better?

Certain websites today have a flair or a particular shared set of experiences that make them a family of sites.  There are “signature elements” of their design.  With Wave 3 we have the opportunity to redefine the look and feel of our site.  What is the next generation of our visual style and interaction model?  How do we take a quantum leap in our web design?  How will Wave 3 “pop” for customers?  What should stay consistent in our web design, product taxonomy, and information architecture in order to create a sense of familiarity for our customers?  What should our next generation of standard controls be?  Will we pursue a ribbon or other common element?  What is our platform target for the browser?  What is our down-level target?  Mobile?  AJAX?  What is the role of Silverlight in our design?

This theme involves close collaboration with the Internet Explorer, MSN, and Search teams.  Examples of features we could build to support this theme include:

* Make it seamless to navigate between content on MSN, information on Live Search, and Windows Live services
* Dramatically update our web-based experience with professional themes and controls that have a new level of performance, quality, and interaction
* Provide a richer browsing experience when running on Internet Explorer, including roaming of favorites and browser settings, and a toolbar and suite-header that naturally extends the browsing experience.

Seamless Windows Experience

As mentioned in our bets, we will invest to deliver a seamless experience for customers who own a Windows PC. We have a unique opportunity to remove the seams between Windows, our applications, and our services. Windows Live Wave 3 will be designed so it feels like a natural extension of the Windows experience.

We have an opportunity to make it much easier for customers to “get started” with Windows Live. Our goal should be to have customers log in, type their Live ID, and then they are automatically “set up” with Live. For new machines, we want Windows Live to come with the experience and will consider investments to make this experience easy. For customers who are upgrading from Windows Vista to Windows 7, we will explore ways to make it easy for them to get Windows Live – particularly for photos, calendar, and movies where our applications complete the experience.

We will “light up” the Windows experience with Windows Live. One way to think about this goal is that from 10’ away a customer can tell that a Windows PC has Windows Live – whether through a new theme or other feature. What does it mean to “light up” the start menu, taskbar, sidebar, and folders? What happens when a customer types their Live ID in their Windows account? As an example, we could “light up” the user tile on the start menu with their picture, add presence information, and automatically replicate and roam their documents, photos, and other media. We could roam a set of Windows settings, including background bitmap and other preferences, making it easy to make one PC look like another PC. Our family safety solution could naturally extend the Windows experience for parental controls, providing reporting and content filtering as well as account management.

What’s the relationship between a Windows account and a Windows Live ID?  Should we have a Live ID connected to account settings?

The Windows 7 platform provides new enhancements that allow us to deliver even richer experiences for customers. We will invest in differentiated features that “light up” on Windows 7, and in this theme we will identify these “signature elements” – gestures, ribbon, or other – that make our suite best on Windows 7. We will explore innovations in graphics and presentation, including window management and high-DPI support, that make our applications feel distinct and “pop” on the new platform. What experience will we provide when we “light up” Windows with Windows Live? What is better with Windows 7? What experiences or scenarios are Win7 only? How do we take advantage of or lay the foundation to take advantage of some of the hardware innovations already available or planned for Windows 7?

Windows Live will have value for every Windows customer. If you have an email account and use the Internet, Windows Live will make your experience better. (add more here…) For customers who have Windows Live Messenger, we will explore using Messenger to recommend and “upgrade” their experience. For example, if a customer is using Messenger on their primary Windows PC, Messenger can recommend “getting all of Windows Live,” download the software, and enhance their Windows experience. How can we use Messenger to increase the depth of engagement of our customers in our software and service suite?

Our client applications today have different experiences for the user tile, toolbars, menus, spelling, and navigation. While there is a cost to sharing code, there is a benefit to customers, who will have a consistent experience with our site. What are the common elements that define our client suite? What is our approach to common controls?  Should we have a shared sign-in for Live ID or keep it separate? What is the evolution of setup and update?  Should we invest in other shared infrastructure – spelling, editing, parts/extensibility? Should we have “parts” that are shared between Live Writer, the Photo Gallery, and Mail that enable connection to 3rd-party services?

Beyond shared components, what are the shared scenarios for the suite? Today it is hard to share photos and add a blog, or start a blog and add a photo album. How can we bring our experiences together for publishing, sharing, and communication?

Many customers will use Office and Office 14, and we will work to connect these customers to our experience. What happens when a customer sets up Windows Live and uses Office? It should be easy to use Windows Live Messenger and our communication services with the Outlook client. It should be easy to publish from Office applications to Live Folders.

This theme will involve close collaboration with the Windows 7 and Office 14 teams. Examples of features we could build to support this theme include:

* Make it easy to get set up with Windows Live by typing your Windows Live ID, and automatically download the information and applications required
* Enhance the Windows desktop with Windows Live services and a new theme so customers feel their Windows PC has “come alive” after Windows Live is installed
* Support Windows 7 platform enhancements so Windows Live feels like a natural extension of the Windows system, including gestures, ribbon, and other elements
* Enable a “one-click” way to take my settings, get a Live ID, and “move them” to the service so that POP/IMAP import happens in the cloud.
* What’s our next level of investment in family safety?  What is the experience of parental controls and account management (with Windows 7)?

Preventing Problems With Windows Drivers

Let’s face it. In this world where technology reigns supreme, almost everybody acts as though their very lives depend on computers. And why not? From simple homework to complicated business plans, computers are tried and tested to make life simpler. So what happens if your most trusted ally suddenly goes haywire for some unknown reason?

Funnily enough, many people try to reason out loud with their computers, as though the machine actually cares whether or not they meet a certain deadline. It is common to see people screaming at their computers for failing them at the most inconvenient times, as if the machine will start running perfectly out of fear of retribution. While some pound their computers into oblivion, the rest just give up and hand the machine over to a computer ‘expert’, hoping against all hope that their precious files can still be recovered. But as the old adage goes, “An ounce of prevention is worth a pound of cure” – and most of the time, all one has to do is regularly update their Windows Drivers.

A Windows Driver is basically a type of software that helps an Operating System interface with hardware (e.g., printers, keyboards, monitors). Drivers are usually included and/or installed along with your hardware purchase. Communication failure between a particular piece of software and its hardware is usually brought about by issues with the Windows Driver. Thus, updating your drivers is necessary if you want your computer to be in top form and don’t want your work to be plagued with confusing computer error codes or, worse, the sudden blue screen of death. This is one of the main reasons why most computer experts recommend updating your Windows Drivers as a first step in troubleshooting – it will correct many of the simplest errors.
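As a practical aside, you can see which drivers are installed, and how old they are, before deciding whether an update is due. The sketch below is a minimal example, assuming a Windows machine with Python installed; it calls the built-in driverquery command-line tool, and the column names used are the ones driverquery reports on typical systems.

```python
# Minimal sketch: list installed drivers and their link dates using
# Windows' built-in `driverquery` tool (assumes it is on PATH).
import csv
import io
import subprocess

def list_drivers():
    # `driverquery /v /fo csv` emits one CSV row per installed driver;
    # /v adds verbose columns such as "Link Date" and "Path".
    result = subprocess.run(
        ["driverquery", "/v", "/fo", "csv"],
        capture_output=True, text=True, check=True,
    )
    for row in csv.DictReader(io.StringIO(result.stdout)):
        # Column names come straight from driverquery's CSV header.
        print(row["Module Name"], "-", row.get("Link Date", "n/a"))

if __name__ == "__main__":
    list_drivers()
```

A very old link date is not proof that a newer driver exists, but it is a good cue to check the manufacturer’s site.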

As mentioned earlier, updating your drivers is more of a prevention than a cure, so it is essential that you know exactly when the right time to do it is. Good occasions include:

(1) you are switching or upgrading your Operating System,

(2) new software features are being offered by the manufacturer and are available through updates,

(3) the manufacturer has released fixes for bugs and glitches in the software, and

(4) Windows Update notifies you that an update is available.

While several computer amateurs may dissuade you from updating your drivers because of the perceived hassle, remember that those updates are there for a reason: to help your computer run as smoothly as possible.

Most software companies offer a driver update at least once a year, so it is vital that you check regularly whether there are any updates available for download. With the continuous progress and advancements in computer technology, it is no surprise that both software and hardware errors are quite common; an update often serves as a straightforward solution to a potentially big problem.

Computers can serve as your best friend or your worst enemy depending on the situation. A combination of common sense and a basic understanding of computers is the key to a better man-machine relationship.

Translucency vs. Transparency

When long-time Windows chief Jim Allchin passed the Windows-development torch to Microsoft veteran Steven Sinofsky in late 2006, many things changed. One of the biggest was Microsoft’s policy on “transparency.”

Following a couple of service pack code and information leaks in July 2007, Sinofsky wrote a post on his internal blog explaining why he believed “translucency,” instead of “transparency,” is the best approach for his team and for customers. Because Sinofsky’s philosophy is so integral to how Microsoft 2.0 is attempting to operate, I decided to share it. This blog post was provided to me by a source, who asked not to be named.

7/9/2007
Transparency and disclosure

Transparent. Easily seen through or detected; obvious.
Translucent. Easily understandable; lucid.

Today was a pretty exciting day for the folks working on servicing Windows Vista as there were a number of breathless stories about SP1 including dates and features. These stories caught us (management) by surprise since not only have we not announced any of the things in these stories, but much of what was reported was not or will not be the case. This is not a good situation to be in and I thought I’d offer some words on how we think of “transparency” relative to disclosure.

One topic I have been having an interesting time following has been the blogs and reports that speculate about how Windows will go from being an open or transparent product development team to being one that is “silent” or “locked down”. Much of this commentary seems to center around me personally, which is fine, but talks about how there is a Sinofsky-moratorium on disclosure. I think that means I owe it to the team to provide a view on what I do mean personally (and what I personally mean to do). Of course, I do so knowing that just by writing this down I run the risk of this leaking, and then we’ll have a round of phone calls and PR management to do just with regard to “Sinofsky’s internal memo on disclosure”. But I thought it would be worth a try.

The most important thing I believe we owe our shareholders and customers relative to how and what we communicate is that whatever we communicate to people be accurate and truthful relative to the work we have going on. This does not mean our plans are free from the ability to change down the road. It does not mean silence until the very last minute. What it does mean is that we should recognize the potential impact our communications can have on customers, partners, and our industry, and we should treat folks with great respect, because when we do disclose what we’re working on people pay attention—and they do more than listen, as they make plans, spend money, or otherwise want to count on what we have to say. When we have to change our plans, modify what has been said, or retract/restate things, we not only look like we don’t have our act together, but we cause real (tangible) pain to customers and partners. One need look no further than the Longhorn/Vista product cycle and the cost to the PC ecosystem of us being out there talking broadly before we really were able to speak with the accuracy our customers and partners assumed. Plans were made. Plans were remade. And then finally people just decided to wait until we really delivered, with some folks not really believing us until the DVD was in their hands, which meant they were not on board with drivers, compatible applications, or the support their customers expected. That example is close at hand, but we can look at examples from Server 2008, ship dates that came and went for any number of products, or even recent examples with Windows Live. This is a challenge that spans all of Microsoft, not just Windows.

All of these challenges come about because there is a mismatch between expectations and reality—that mismatch or gap is the heart of customer dissatisfaction. What we can do is be thoughtful about planning and then just as thoughtful about how we communicate those plans. That is what we are doing.

Customers and partners want to know about SP1 for Vista. Actually they need to know. We want to tell them. But we want to do so when our plans and execution allow that communication to be relatively definitive. We are not there yet. So telling folks and then changing the plans causes many more challenges than are readily apparent. While it might sound good on paper to be “transparent” and to give a wide-open date range and a wide-open list of release contents, we all know that these conversations with customers don’t end with “we’ll ship by <x> date and we’ve prioritized <quality>”. Folks do want to know “did you fix this <example> bug?” That is reasonable, but we don’t have all those answers, and thus we cannot have reasonably consistent and reliable communication…yet. We are working towards that. While there is clearly a challenge in the near term in not offering details, this challenge is much smaller than if we get the wrong information out there and have to reset and unset expectations. Even among our enterprise customers, for whom this type of information is routine, we have a long history of really scrambling these most valuable customers with “information” that turned out to be “misinformation”. The difference we are trying to highlight is the difference between transparency in what we’re “thinking” and transparency in what we’re “doing”. Everyone wants to know what we’re thinking, but making it clear that those are thoughts versus “doing” is a subtlety lost on a mass audience.

So our goal as an organization is to be much more thoughtful and considerate with what we disclose. Premature disclosure might make us feel like we were helping. Heck, it might even make some customers and partners feel good, and some partners might even understand the challenges we face in managing our projects. But on the whole it has not made Microsoft a good citizen of the ecosystem, and it certainly has not made us a good enterprise partner. Being thoughtful and considerate means we will be just as open and just as transparent about roadmaps and plans as we ever were (meaning the contents we disclose), but we are going to work to eliminate the premature disclosure that has low reliability and high error rates—we will have the right materials for enterprise customers, brief industry analysts, and work with partners, all with valuable and timely information. Notice that these audiences are our customers and partners, and that a non-goal is allowing the news cycle or the needs of the press to drive disclosure timing and contents.

Just as we plan the software, we will plan the disclosure of our work. It means that we will develop the messages (so expectations are correctly set), the supporting information (so all the details are there), and the overall communication plan (so we don’t leave anyone out). Product Management owns and drives this. In many ways this is their product deliverable. Just like we don’t want people running to demo a feature hot off a build machine, we don’t want to rush to disclose until we have these plans in place. Our PMG team is dedicated 100% of the time to communicating this information in a planful way to the Microsoft field, customers, partners, and the press. They are not perfect, but like all of us they strive to do their best, learn, and improve with each turn of the crank. This is a key point: we are trying to be new and improved with respect to disclosure, and one thing we need to do is go out and make sure we set expectations on what new and improved means and how we will be working.

But our PMG team cannot do their job effectively if they end up in reactive mode. Stories like the ones about SP1 (or similar leaks about Live Services) make getting the word out pretty impossible. It puts us on the defensive. It confuses customers. It makes it so the message we want to get out there—the features we delivered, the quality of the work, the scenarios we enable, etc.—just doesn’t make it through the cacophony of chatter about the rumors, partial information, and other guesses. Of course we can’t be proactive about how we wish to be new and improved if we are always responding to these situations.

We also have ongoing disclosure with customers and partners of all types as we get their feedback and input about how we should evolve Windows. These discussions are about what we’re “thinking” and, when done in a manner in which expectations are clear, they are super valuable and critical. We do this in a deliberate and constructive manner. These are dialogs. They are not press releases. We work with customers. We provide tools to the field that talk about what we do know about the next releases of our products. We train the field to deliver those messages. Is there enough detail relative to expectations? Never. That is a natural outcome of making sure what we do say meets our over-arching goal of being truthful and accurate in what we say.

Some folks think that it is a good idea to tip off the press or give a customer (even under “NDA”) early details of what they are certain we will be communicating in the future. Please don’t. This doesn’t help. It only feeds the frenzy and diverts attention from doing a good job. This is especially true when we burn “news cycles” responding. The ripple effect of the SP1 stories is immense—our PR team, OEM teams, enterprise sales, IHV relations, and on and on all spent the past 24 hours (yes these folks all are on call) scrambling to address the rumors. Ultimately, this means we spend less time planning how we will talk about and disclose the work we are doing. And ultimately it just causes problems for everyone. Even if one person, somewhere and for some reason, felt like it was the right thing to do by disclosing what they believed to be the case.

All I’m asking folks to do is think before they disclose—in person, in blogs, over the phone. Our product management team owns disclosure and owns communicating with the world the work we are doing. They take this work seriously. They have a very strong desire to tell people about what we do—it is their job. They want to do this well and that takes discipline from everyone involved. Please help.

I know many folks think that this type of corporate “clamp down” on disclosure is “old school” and that in the age of corporate transparency we should be open all the time. Corporations are not really transparent. Corporations are translucent. All organizations have things that are visible and things that are not. Saying we want to be transparent overstates what we should or can do practically—we will share our plans in a thoughtful and constructive manner.

The upside of being deliberate is that we hope to exceed expectations with what we do. That is not to say that if we are silent people will expect nothing, so anything we deliver is great. Rather, since we will be talking all along about what we do (in a planned manner), when we show off the software it will intrigue and excite people because it really does what we said it would do, and does so in an elegant, thoughtful, and spectacularly good way. We are different from some companies in our industry because our success is dependent on and intertwined with that of thousands of other companies. We take that extraordinarily seriously, and thus our communication is designed to take that into account by sharing actionable, accurate, truthful, and complete information in a timely manner—timely means that there is time to act, and if acted upon, the results are what we collectively hope to achieve.

Welcome to my (other) Microsoft blog

Welcome!

You’ve come to the right place if you are looking for the book site/blog that is meant to complement the book Microsoft 2.0: How Microsoft Plans to Stay Relevant in the Post-Gates Era. The book is due out in early May 2008 from John Wiley & Sons, just a couple of months before Microsoft Chairman Bill Gates relinquishes his day-to-day duties at the company he founded more than 30 years ago. As of July 1, Microsoft officially begins its next, post-Gatesian chapter.

This site will include information about the book; supporting material that didn’t make it into the printed pages; and regularly updated information on the products, people, and strategies detailed in Microsoft 2.0. Because I had to submit the final manuscript earlier this spring — shortly after Microsoft made its $44 billion acquisition bid for Yahoo — the book had to be “frozen” in that moment of time. But as Microsoft’s acquisition moves forward (or doesn’t); as more Softies quit (and new ones join); and as Live Mesh, Windows 7 and other new products begin to take shape, I’ll be covering all that and more on this site.