Tuesday, December 28, 2010

Why Exempt Wireless Carriers (Net Neutrality Rules)

I've spent a little more time reading about the newly passed FCC rules for internet neutrality, and I'll have more comments over the next week or two. The first thing that jumped out at me was the FCC's decision to exempt wireless carriers from the provisions of the rule. It seems extremely strange to me that they would choose to focus the new regulation only on the legacy fixed-line carriers.

According to the FCC statement after the vote:
“Mobile broadband presents special considerations that suggest differences in how and when open Internet protections should apply. Mobile broadband is an earlier-stage platform than fixed broadband, and it is rapidly evolving.”
While the facts in this statement are true, I question the ultimate conclusion that the infancy of mobile broadband suggests a lesser standard of regulation. Mobile broadband is currently the fastest growing segment of the internet. A Fortune Magazine article suggests that half a billion smartphones could be sold next year. That's 500,000,000 more people accessing the wireless web. For many people, especially those in the lower income brackets, wireless is the only way to access the internet. In the very near future, wireless broadband may end up the primary way we all access the internet. Why would Chairman Genachowski choose to focus on the tired legacy technology instead of getting ahead of the curve with wireless?

If mobile/wireless is going to take off as much as most experts predict, we run the serious risk of developing competing internets with vastly different standards. While legacy fixed-line carriers will be subject to strict standards and will look to throttling, metered access, and other solutions, the wireless providers will face little in the way of regulation and thus may grow up with a completely different model. This would require consumers to adapt to two different forms of the web, and it would make web development much more difficult. It would seem to me that such a duopolistic model will stifle innovation and hamper those who are trying to build tomorrow's great web apps/services.

The wireless broadband experience already has enough obstacles for most users: interoperability of handsets across networks (think iPhone on Verizon), early termination fees, and network sharing. By failing to impose the tougher standards of net neutrality on the wireless providers, the FCC and Chairman Genachowski missed an opportunity to help remove some of these obstacles.

So, why did this all come about? I think another quote might put things into stark relief:
“[We] recognize that wireless broadband is different from the traditional wireline world, in part because the mobile marketplace is more competitive and changing rapidly. In recognition of the still-nascent nature of the wireless broadband marketplace, under this proposal we would not now apply most of the wireline principles to wireless, except for the transparency requirement.”
This comes from a joint Google/Verizon statement issued in August. Compare the wording of this statement with the wording of the FCC statement above. It would appear that Google and Verizon's heavy lobbying has paid off tremendously in the FCC's ruling. (If you're wondering why Google has a vested interest in wireless, it's because of the huge potential it sees in mobile advertising. In the future, Google hopes to generate a substantial part of its revenue from ads delivered to mobile devices. As a clear indicator of this potential, one needs to look no further than Google's acquisition of AdMob for $750 million in May.)

Please note that I'm assuming a certain philosophical acceptance of net neutrality as a general principle, an issue that is far from black and white. My point is just that if the FCC is going to impose net neutrality standards, why would it exempt the segment where those standards might be most beneficial to consumers?

I’ll have more on the new FCC rules in the coming weeks.

Good Talk,
Tom
[Sources: http://www.economist.com/blogs/babbage/2010/12/net_neutrality, http://tech.fortune.cnn.com/2010/12/22/2011-will-be-the-year-android-explodes/, http://googleblog.blogspot.com/2010/05/weve-officially-acquired-admob.html]

Blinking Office Lights More Than An Annoyance…

We've all most likely been annoyed by blinking fluorescent lights in an office building. It seems those stark, harsh lights are designed to drive us all to migraines and make work even more frustrating. But the city of St. Cloud, Minnesota is looking to turn those lights into the next generation of wireless networks.

St. Cloud municipal offices will pilot a new technology from LVX that uses LED lights to transmit signals to special modems attached to the computers below. These modems interpret the blinking (much like a dial-up modem interprets tones) and then send messages back from the computer to the lights, which are equipped with a receiver as well. Current tests show that this system can achieve speeds comparable to home DSL service (roughly 3 megabits per second). While this performance seems to suggest that Wi-Fi would be a better fit, the point of these LED-based networks is to work in tandem with Wi-Fi to reduce the congestion on overcrowded office networks.
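
The core idea is easy to see in miniature. Below is a toy Python sketch of the on-off keying concept behind light-based networking: bytes become a stream of fast light pulses, and the receiver turns the pulses back into bytes. To be clear, the function names and framing here are my own invention for illustration; LVX's actual modulation scheme is surely more sophisticated than this.

```python
# Toy sketch of the on-off keying idea behind LED ("visible light")
# networking: data is encoded as fast light pulses (1 = LED on, 0 = off)
# and decoded back on the receiving side. Purely illustrative; not
# LVX's actual protocol.

def encode(data: bytes) -> list[int]:
    """Turn bytes into a stream of light pulses (1 = on, 0 = off)."""
    pulses = []
    for byte in data:
        for bit in range(7, -1, -1):   # most-significant bit first
            pulses.append((byte >> bit) & 1)
    return pulses

def decode(pulses: list[int]) -> bytes:
    """Reassemble the pulse stream into bytes (8 pulses per byte)."""
    out = bytearray()
    for i in range(0, len(pulses), 8):
        byte = 0
        for bit in pulses[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

message = b"hello"
assert decode(encode(message)) == message
```

The real trick, of course, is doing this millions of times per second with an LED and a photodetector instead of lists in memory.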

The really cool thing about these LED network systems is that they may actually make it cheaper to light your office, because LEDs are so much more energy efficient than traditional office lighting. Additionally, add-ons are available to sense ambient light and dim the LEDs to save more energy. You could also change the color to direct people around the office (i.e., "Follow the green lights to the xyz meeting"). That seems pretty cool.
But what about headaches? Constantly blinking lights (blinking in code, no less) sounds like a recipe for seizures and migraines. But remember, current CFL lighting blinks at a much lower rate (about 60 times per second). These lights will blink much faster, meaning you should actually be less likely to notice the blinking (and therefore less likely to be bothered by it).

I think it’s awesome that we’re looking at different ways to communicate beyond Wi-Fi.

Good Talk,
Tom
[Sources: http://news.yahoo.com/s/ap/20101227/ap_on_hi_te/us_tec_internet_via_lighting, http://dvice.com/archives/2010/12/flickering-offi.php]

Net Neutrality Becomes a Reality

Yesterday (12/21) the FCC voted 3-2 to impose net neutrality standards on ISPs. I've talked a lot about net neutrality on this blog and I'm pretty happy to see the FCC taking some action (though I'd have much preferred that some body of elected officials had taken action). I have not yet had a chance to fully digest the specifics of the ruling (it's been a busy few days at work…).

I’ll comment in full in the near future, but for now here is the announcement and some links to reactions/commentary/etc.

Official FCC Announcement:
http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-303745A1.doc
Chairman Genachowski’s statement:
http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-303745A1.doc
Steve Wozniak weighs in:
http://www.theatlantic.com/technology/archive/2010/12/an-open-letter-to-the-fcc-regarding-net-neutrality/68294/
Likelihood of a Republican Congress Overturning:
http://thehill.com/blogs/hillicon-valley/technology/134817-analyst-congress-unlikely-to-overturn-net-neutrality
Kevin Fogerty of IT World:
http://www.itworld.com/government/131583/what-you-lost-fccs-net-neutrality-ruling
Mashable:
http://mashable.com/2010/12/21/fcc-passes-net-neutrality/
MG from TechCrunch:
http://techcrunch.com/2010/12/21/verizon-google-fcc-net-neutrality/
Alexia from TechCrunch:
http://techcrunch.com/2010/12/21/fcc-net-neutrality-vote-is-just-the-beginning/

That's all for now. It's plenty of reading (some of which I haven't gotten all the way through yet) and should give you a pretty good idea of the reaction. I'll provide my own analysis soon.

Good Talk,
Tom

Sunday, December 19, 2010

Brad Burnham and USV Weigh in on Net Neutrality

I’ve written quite a bit about net neutrality over the last couple of months. I’ve learned a lot about the issue while reading and writing about it. I’ve gotten a few things right, and a couple wrong, but hopefully I’ve at least highlighted the importance of the issue. This evening I was reading a few of the blogs I regularly read when I came across a post by Brad Burnham from Union Square Ventures that talks specifically about the changes the FCC will take up on December 21. The crux of Brad’s proposal is based on language suggested by Barbara van Schewick, a professor at the Stanford Law School:


A non-discrimination rule that bans all application-specific discrimination (i.e. discrimination based on applications or classes of applications), but allows application-agnostic discrimination.
The great thing about this approach is that it seems to answer the ISPs' biggest objection to any net neutrality rule. ISPs can still regulate bandwidth to protect the health of their infrastructure. They can still throttle networks to handle demand, and they can still charge more for faster/better service. But they cannot discriminate based on the application or type of application. In other words, an ISP could not slow only video traffic or only iTunes traffic. This should – and I emphasize "should" – prevent ISPs from stifling startups and new innovation.
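
To make the distinction concrete, here's a hypothetical sketch in Python of the two kinds of throttling decisions. The function names, thresholds, and traffic categories are entirely my own invention, not language from van Schewick's proposal or any FCC text.

```python
# Hypothetical illustration of application-agnostic vs. application-specific
# discrimination. All names and numbers here are invented for illustration.

def agnostic_throttle(bytes_used_this_hour: int, cap_bytes: int) -> bool:
    """Permitted under the proposed rule: the decision depends only on
    how much traffic a user sends, not on what kind of traffic it is."""
    return bytes_used_this_hour > cap_bytes

def app_specific_throttle(traffic_class: str) -> bool:
    """Banned under the proposed rule: the decision keys off the
    application or class of application (e.g. slowing only video)."""
    return traffic_class in {"video", "itunes"}
```

The first function never asks what the bits are, only how many there are; the second is exactly the kind of inspection the rule would forbid.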


I encourage you to read Brad's post in its entirety here. You can also see a video of Barbara van Schewick talking about net neutrality here (warning: it's almost 2 hours long). I would love to hear other thoughts on this proposal. It makes a lot of sense to me, but, like any proposed rule, there are sure to be unintended consequences that I'm not seeing.

Good Talk,
Tom

Wednesday, December 15, 2010

YouTube Offers Paid Rentals? Will It Work?

I’m apparently a bit behind the curve on this one. I was just messing around on YouTube (it’s 1AM, what else should I be doing?) when I noticed the sidebar offered the ability to rent Reservoir Dogs. I was pretty shocked, as I’d never before considered paying for something on YouTube and didn’t realize they were experimenting with different revenue streams. So, I did what anyone in my generation would do: I Googled it. (Side note: Firefox does not recognize Google as a verb…I feel like that should change).

Well, it turns out that this little experiment started a couple of years ago (apparently quite quietly). Google opened a store at www.YouTube.com/store that is accessible only in the United States. YouTube offers users the opportunity to rent movies for 24 hours for a fee of anywhere from $1.99 to $4.99 (at least that's what I saw in the few minutes I spent on the site). So, while this was news to me, it wasn't exactly shocking. It's been clear for a long time that if Google hoped to make any money off its YouTube deal it would need to find new ways to monetize content and new streams of revenue. This appears to be an experiment aimed at doing just that.

So, what do I think? First off, the site navigation is TERRIBLE. In fact, the entire customer experience (at least up to the point of sale, where I stopped) is awful. Navigation is just too complicated. Filtering sucks, the search feature is not good (this is a Google company, remember), there are no customer reviews, no suggestion/recommendation engine, no favorites, no previews available. It just strikes me as what an online movie rental site would have looked like had it launched in 1999. From the same people that brought you Gmail and, more importantly, the original YouTube, this is hugely disappointing.

I'm wondering if part of the problem is the way we shop for movies vs. the way we (and by "we" I mean "I") use YouTube. For me, most of the fun of YouTube is the serendipitous discovery made possible by clicking through link after link in the suggestions panel. Beyond the video that caused me to go to YouTube in the first place, I never have an expectation of what I might find. This is completely different from the way I search for a movie to watch. While I might not know the exact movie, or even the type of movie, I don't want to allow serendipity to play as large a role. When I'm committing a couple hours and a couple bucks, the decision process is much different than when I'm investing no money and only a few minutes. In other words, I'm more risk averse and thus in need of better guidance from the tools employed to help me out. So that's a big reason why I think the interface/customer experience feels so wrong for the YouTube store.

I also think it's a tough market for YouTube to break into. Between Netflix, Redbox, your local video store, set-top boxes, Hulu, etc., I have a hard time seeing a niche for a YouTube store. (I'll admit that Google TV could be a game changer, but I'm not familiar enough with it to comment, so I'm going to pretend it doesn't exist…) When I want to rent a movie I go to Netflix and watch one. Or, if I'm not a Netflix subscriber (or they don't stream the movie I want), I open up iTunes and rent it there. Given that my top-of-mind impression of YouTube (from years of experience) is grainy, shaky, homemade videos of people doing ridiculous things, I just don't see myself ever using the service (not to mention the buffering of Flash video compared to Silverlight). I can't imagine a time when I'd skip Netflix and iTunes (and Hulu and Comcast On Demand) for the YouTube store. Also, I've heard (though I can't confirm or find a reputable enough source to quote) that the average age of YouTube users is under 16. These are people who have grown up on free video, torrent sites, etc. Add the fact that they don't have credit cards, and it seems unlikely that they will suddenly decide to pay for content. Lastly, the online video-on-demand market is still comparatively small.

Now, the other side of the coin: Google TV is coming. Getting videos off the computer screen and onto the TV could make a huge difference (something Netflix noted when it rolled out streaming to your TV). I think with a major site redesign, lots of help from Google TV, and the ability to stream onto an iPad (maybe with 3G/4G), YouTube could turn this into a winner. With YouTube revenue estimates ranging between $450 million and $1 billion this year, the company can afford to support the store until it catches on.

While I'm two years late to notice, I will be watching to see how this plays out. Until then, I'm keeping my Netflix subscription.

Good Talk,
Tom

Monday, December 13, 2010

Video: Leading with IT

Below is an interesting video of an MIT class being led through a case study. It's a long video, but Q&A starts at minute 22. Personally, I think it's worth watching as it touches on a number of important points relevant to IT and business professionals, not the least of which is to understand what hidden resources you have available and how technology can help capitalize/cash in on these resources.

Cloud Computing, Flexibility, Security and Wikileaks

I read two interesting articles in the past 10 days that highlight the security benefits corporations can gain by embracing the cloud. It's somewhat ironic how the first article involved Wikileaks, the infamous website best known for leaking classified US and foreign government documents, seeking refuge in the cloud by moving onto Amazon's EC2 cloud computing service, while the second article explains how Amazon's ability to scale was instrumental in protecting it from attack by supporters of Wikileaks.

Some back story might be in order: On November 28, 2010, Wikileaks, already well known for leaking classified cables of foreign governments, released the first in a series of leaks of US government documents. There was a near-immediate uproar in America, and two clear sides emerged rapidly. On one side were those who believed the leaking was, at a minimum, harmful to the US and in poor taste, with many going so far as to call it treasonous. On the other side were those who defended Wikileaks and who believe that open dissemination of information will ultimately lead to a healthier society.

As the battle raged in the media, hackers tried taking matters into their own hands. Within a matter of days Wikileaks' servers were overwhelmed by a Distributed Denial of Service (DDoS) attack. Wikileaks quickly moved its servers a couple of times, but each time was taken offline by the DDoS attack. Eventually Wikileaks settled on Amazon's EC2 cloud computing service. The attacks against this new host were entirely unsuccessful, as Amazon was able to add capacity rapidly to counteract the spike in traffic from the DDoS attack. On December 2, 2010, however, Amazon, bowing to public pressure and the threat of a boycott, removed Wikileaks from its servers and refused to host the site. (Around the same time, MasterCard and Visa froze Wikileaks' accounts and stopped processing donations/payments to the company.)

In response to Wikileaks being booted from Amazon's servers, hackers who supported Wikileaks embarked upon a campaign of retribution. On December 8, 2010, the Visa and MasterCard websites were attacked and temporarily taken offline by supporters of Wikileaks in response to those companies ceasing to work with Wikileaks. Amazon was similarly attacked; however, the web giant was largely unaffected. One anonymous hacker tweeted, "We can not attack Amazon, currently. The previous schedule was to do so, but we don't have enough forces." It seems that no matter how much traffic they sent to Amazon's servers, the company was able to respond with additional capacity to counter the attack.

This episode shows the incredible flexibility available with cloud computing (as well as Amazon EC2's resilience). As more companies move into the cloud, I would expect to see the effectiveness of DDoS attacks abate somewhat.
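
As a back-of-the-envelope illustration of why elasticity blunts a DDoS, here's a toy Python model: an elastic host keeps adding instances as request volume grows, so per-server load stays bounded. The capacity numbers are invented, and this is of course not how EC2 actually provisions capacity internally.

```python
# Toy model of elastic scaling under a traffic spike. An elastic host
# adds instances as request volume grows, so no single server is ever
# overwhelmed. All numbers are invented for illustration.

CAPACITY_PER_SERVER = 1_000  # requests/sec one instance can absorb

def servers_needed(incoming_rps: int) -> int:
    """Scale out so no instance sees more than CAPACITY_PER_SERVER."""
    return max(1, -(-incoming_rps // CAPACITY_PER_SERVER))  # ceiling division

for attack_rps in (500, 5_000, 500_000):
    n = servers_needed(attack_rps)
    # Each instance stays within its capacity regardless of attack size.
    assert attack_rps / n <= CAPACITY_PER_SERVER
```

A fixed-capacity host, by contrast, simply falls over once the attack exceeds whatever hardware it happened to have racked, which is exactly what happened to Wikileaks' earlier hosts.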

Good Talk,
Tom 

[Sources: http://centerstance.wordpress.com/2010/12/11/cloud-computing-security-and-amazon-why-elasticity-matters/, http://www.infoworld.com/d/cloud-computing/can-cloud-computing-save-you-ddos-attacks-306, Wikipedia.com]

Sunday, December 12, 2010

Open Platform vs. Closed Platform - Mobile OS

I'll start by saying that Fred Wilson of Union Square Ventures inspired this post when he wrote the following:

"Think RIM is going to struggle more and more every day. Moves like they are making against Kik, which provides cross platform BBM, are likely to come back to haunt them. They should be making it easier for their users to chat with iPhone and Android users, not harder. Open platforms win and closed platforms die. And RIM still does not get what being an open platform means."

Fred is a very influential tech venture capitalist who has backed companies like Twitter, Foursquare, Tumblr, and Zynga long before they became household names. When he has an opinion on the future of mobile technologies people sit up and take notice.

I blogged exactly 11 months ago about where I thought the mobile application future was headed (another post inspired by Union Square Ventures) and I wrote then that "I honestly can not see anything other than an open development standard emerging (and I'm loath to bet against Google), but I'm excited to see this all play out." A mere 11 months later it is still too soon to tell if I (and Fred and Brad) are right, but the numbers are starting to bear us out. Currently Google's share of the smartphone OS market is growing at 6.5% per quarter. RIM (BlackBerry) lost 3.5% in the last quarter, and Apple (iPhone) stayed relatively stagnant at +0.8%. If this trend continues, I imagine Apple and RIM, both closed platforms, will find themselves significantly trailing Google, an open platform, within the next 5 years. (Obviously, I think the iPhone will get a rather large, though temporary, bump when it is finally released on Verizon - especially if it is a 4G/LTE version. However, I think the long-term trend will remain unchanged.)

I'm still excited by coming developments in mobile technology (LTE on Verizon anyone?) as our phones and networks get ever faster and more powerful. And, I'm willing to bet that as the technology improves developers are going to be less willing to put up with stringent controls on the distribution of their work.

Maybe I'll re-visit this post in another 11 months.

Good Talk,
Tom

US CIO Announces New IT Policy

The CIO of the United States government released a detailed 25-point plan on December 9, 2010 that aims to restructure federal government IT by creating a "cloud first" policy as well as reducing the number of federal government data centers by 800. While nearly everything in the document has been talked about by the government at one point or another over the last few years, this document really ties it all together and seems to create a path toward an actionable plan.

Two key points that stood out to me were the adoption of a cloud-first policy and the consolidation of data centers.

The section on the "cloud-first" policy talks about a web-based video company (I'm not sure of the exact company, but think YouTube or the like) and how it was able to scale from an initial demand of 25,000 customers to 250,000 within three days, and ultimately to 20,000 new customers per hour. In contrast, the government-run Cash for Clunkers program (officially called the Car Allowance Rebate System) had poorly scalable IT infrastructure, and when demand exceeded estimates, the system crashed repeatedly. It took more than a month for architects and developers to stabilize the environment. The document states that cloud technologies are economical, flexible and fast. The plan calls for each federal agency to identify three "must-move" services within the next 3 months, and then actually move all three to the cloud within the next 18 months.

The second initiative that caught my attention (actually the first one in the plan - I'm summarizing out of order) was the goal to eliminate 800 data centers by 2015. The document lays out 3 key steps to do so:

1) Identify program managers at the agencies to lead the consolidation effort
2) Launch a data center consolidation task force
3) Create a publicly available data center dashboard to track the consolidation progress

As I've written about before, data center consolidations can be tricky and I think point number one is going to be the most critical. Strong program managers are essential to the success of the plan. If all goes well, the government can save a lot of money by eliminating so many data centers (I'll leave a discussion about the social effects of the resultant job cuts to someone else).

The entire 25-point plan is worth reading (it's only about 40 pages) and can be found here: 25-Point-Implementation-Plan-To-Reform-Federal-IT

I'm glad to see some quality discussion about how the government can adopt IT best practices. Hopefully quality action will follow.

Good Talk,
Tom

Friday, December 10, 2010

System Access: Adding, Deleting, Maintaining User Profiles

I recently started on a new project at work. I'm in a fairly non-technical role on a technical project. I've been there just over a week now and I finally feel like I understand what's going on, where I fit in, and where the project is headed (for better or worse). What's sad is that even after 11 days, I can not (officially) log onto my client's computer systems. I have no badge. I've been roaming the building for days, but I officially do not exist.

This is an example of poor access controls. Despite having no log-in of my own, I've spent hours navigating around their network and learning what I need to learn (that might sound hacker-ish…it's not meant to; the project I'm working on uses a SharePoint site and a few other cloud-based tools for tracking and collaboration). Despite having no badge or key card of my own, I've spent days navigating around their buildings (plural). This got me thinking today about how IT departments handle access issues. Anyone who works as a consultant regularly deals with getting acclimated to new cities and getting access to new client systems. But, for reasons I don't fully understand, some clients are extremely proficient at the task of managing users and others are not. I've decided it mainly comes down to how company leadership balances security policy with the need to get things done.

The utility companies I've worked at are, for good reason, extremely focused on security. It's near impossible to enter a building or log onto a system without a badge or a system profile. Most of the buildings I've been in have gates in the lobby that alarm if you don't swipe your card, elevators that require a key card to operate, and uniformed security in fixed and rotating positions. Additionally, the systems are configured so that a user can only be signed in from one location at a time (no sharing of log-ins), accounts lock out if the password is entered incorrectly too many times, and so on. They can not be accused of being lax on security. At the same time, I've never waited more than a few days for all the access I need. I think in part this is due to the fact that utilities operate in a regulated environment with tightly controlled costs and margins. Any delays can adversely affect profitability and costs, which in turn can annoy powerful regulators.

I can contrast this with a media company I worked at that seemed mostly unconcerned with security. On day one I was handed a plain, unremarkable Kastle card and a form for access to the building. I was told to try to turn the form in by the end of the first week, but the Kastle card was already active and I had full access to any floors I needed (mind you, my client only had offices on two floors). When I met my client sponsor upstairs (not in the lobby), he gave me a user name and password from a former consultant who had left a few weeks earlier. He said it would get me up and running until my paperwork was processed. The upside is that I was working full-bore on day one. The downside is that IT security was nearly non-existent. I mean, I was using the log-in of a guy who'd been gone for 3 weeks…why did it even still work? These things should be turned off right away.

Then I've worked at government agencies. These guys take security to a whole new level. I had to go to the client site two weeks before my start date to pick up paperwork, get fingerprinted, and sign some forms. Then, on day one I was given a guest pass and an escort. These two things stayed with me for two weeks. I could not carry my own laptop into the building until it was certified as virus/malware free by IT and given a special tag to denote it as "non-government property." I was required to leave my BlackBerry in the car because it had a camera (I eventually bought a new one with no camera). Needless to say, for the first couple of weeks my productivity was less than optimal (though my bill rate was pretty good!). Because of the sensitivity of the agency I was working with and the nature of the material I was handling, security was a vastly higher priority than cost or profitability.

What it all comes down to is finding the right balance of security vs profitability for your company. For a highly sensitive government agency, it makes sense that efficiency would take a back seat to security. For a small media company it might be OK to relax a bit on security. And for a utility company, it's probably right that they fall somewhere in between.

A few tips to get up and working as fast as possible:

1) Start the process and paperwork as soon as possible. If you know weeks ahead of time that you're going to start on XYZ day, call your client and find out what can happen ahead of time. Sometimes providing a name and DOB can really get the ball rolling. Likewise, if you're a buyer, reach out to your consultants and request the necessary information. There is no reason to wait until the consultant is on the ground (charging your company).

2) Do not give out your password. I don't know how my client got the old consultant's log-in information, but make sure to protect your own. Change passwords regularly, do not use the same password for everything, and never give it out. If you're a buyer, delete access as soon as possible after someone leaves, require password changes frequently, and never encourage password sharing.

3) Grant only the access needed. If someone only needs access to one system, provide only that access. If they only need access to one floor in a building, do not give them access to all the floors. And if you are a consultant (or an employee even), don't attempt to access things you don't need. This will only open you up to liability.

Good Talk,
Tom

Thursday, December 9, 2010

Vendor Management

When you work in IT, it's a fact of life that much of your time will be spent working with vendors and consultants. The field has become extremely specialized, and it often makes sense to work with specialists who have years of experience mastering the issue that you are facing for the first time. Oftentimes a group of skilled contractors can design, build, implement, test, or upgrade a system faster and cheaper than a company's own resources could. (There is a reason IBM moved out of making computers and into consulting.) It often makes perfect sense to outsource certain problems (in fact, sometimes it's the only way).

However, working with contractors comes with its own set of risks. While you generally get skilled resources who can be on-site and up to speed very quickly, you can end up surrendering a lot of control over the project(s). I've seen firsthand how contractors can differ greatly with their clients over how the project should proceed or what the timeline needs to be. That's why it is important that, from the very beginning, buyers carefully develop and manage the relationship with the consultants/vendors. There are two primary steps to include in this management process that happen BEFORE the vendor starts working:

1. Vet the Contractor
Oftentimes buyers are far less familiar with the process than the vendors are (a buyer may sign one major IT project in his career, whereas consultants will sign 5-10 every year), and thus they rely heavily on advice from the very people selling them the services. There is nothing necessarily wrong with this - a big part of why companies pay high fees is for the consultative advice. But a buyer needs to be sure he vets the conclusions of potential vendors with others in the industry.
  • Ask for references, call (or better yet visit) other clients of the firm you are considering.
  • Talk to other consulting firms currently working at your company - you'll get sometimes-biased opinions, but at least you'll hear the other side
  • Talk to former clients - find out what went well, what went wrong, etc.
2.  Vet the Contract
Despite the promises of the salesman (or saleswoman) you've been talking to for the last 6 weeks, the only thing that matters once the project starts is what the contract (also called a statement of work) says. You may have been told that the company can do XYZ for you, but if you don't write it into the contract, you can rest assured that it's either not getting done or it's going to cost you extra.
  • Lay out a specific time line
  • Include penalties for non-performance/missed deadlines
  • Define scope as clearly as possible - what will the contractors do, what will the client be required to provide for them to do it, what is specifically excluded under the contract?
  • Define who ultimately owns what - vendors/consultants defend their intellectual property/capital rights aggressively. Make sure to spell out what work papers, processes, and tools you will own when the project is over.
Obviously, there is a lot more to negotiating with a vendor than just what I've laid out above, but you would be surprised by how many companies fail to do even this little bit. Too often the big-name vendors come in and buyers just trust that they know what they're doing. Or you'll get a really strong first contract, and then each successive add-on project starts with a weaker and weaker contract. Ultimately, you'll end up spending millions of dollars a year while hearing excuse after excuse for why the project is not done. If you don't have specific penalties/separation criteria in the contract and something goes wrong, you'll be left in a really tough spot.

 Good Talk,
Tom

Sunday, December 5, 2010

Data Center Program Management


Data center migration programs require coordination across multiple departments/divisions within a company and can be incredibly complex. Additionally, such programs typically involve millions of dollars and the company's most critical applications. As such, it is important that these programs are well thought out, well planned, flawlessly executed, and rigorously monitored. Below I offer five tips to help ensure successful program management.

1) Use Tools that Work: 
 While "one-size-fits-all" might work well for hats or gloves, it is not a good idea in data center migration program management. I've seen some clients with incredibly over-engineered tools and processes. Others have no standards at all. What's important is not the complexity or simplicity of the tools; it is, rather, to ensure that the tools fit the culture and capabilities of the company and that they meet the data transparency needs of the program. Some companies might be fine with basic Excel spreadsheets but would be overwhelmed by complex, custom-built PM tools. Other companies are going to require something more involved and will be comfortable adapting to a new tool. Companies should understand that there are no one-size-fits-all tools, only one-size-fits-now tools.



2) Adapt Best Practices
It is critical that any Project Management Office (PMO) adapts as the project moves forward. As more information becomes known and lessons from early successes and failures are learned, the project managers must discard broken processes and ineffective tools while promoting those that work well. Additionally, and equally important, the PMO must search for best practices wherever they exist. We all bring best practices from our own experience, but a great PMO will also look outside the company and to other departments and programs within it. A great PMO will get better as the project progresses.



3) Engage Key Leadership
This probably should come first, as it is by far the most important factor in project success or failure. By engaged leadership I don't mean a strong project manager; I mean the project must have an active executive within the organization who checks on the project regularly, holds the project team accountable for deadlines and goals, and champions the program to others within the company - removing roadblocks and opening avenues for collaboration. Additionally, an engaged executive can often expedite key decisions.


4) Measure, Measure, Measure
It is an old cliché that "what gets measured gets done", but it's a persistent cliché for a reason: it's true. When designing program plans, it is essential to ensure that the proper metrics are created and the proper deadlines set. Equally important, these deadlines and metrics, as well as progress against them, must be clearly and regularly communicated to the project team members.
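As a minimal sketch of the idea (the milestones and dates here are entirely made up), progress against deadlines can be tracked with something as simple as:

```python
from datetime import date

# Hypothetical milestones: (name, planned finish, actual finish or None if open)
milestones = [
    ("Inventory complete", date(2010, 10, 1), date(2010, 10, 5)),
    ("Network build-out",  date(2010, 11, 1), date(2010, 10, 28)),
    ("Wave 1 migration",   date(2010, 12, 1), None),
]

completed = [m for m in milestones if m[2] is not None]
on_time   = [m for m in completed if m[2] <= m[1]]

# These two numbers are the kind of thing to report to the team every week
print(f"{len(completed)} of {len(milestones)} milestones complete")
print(f"{len(on_time)} of {len(completed)} completed on or before plan")
```

Even a trivial report like this makes slippage visible, which is the whole point of "what gets measured gets done".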


5) Know What You Are Migrating
[Photo by Lawrence Berkeley National Laboratory (Flickr Creative Commons)]
When migrating from one data center to another, it is inevitable that servers and applications are going to come offline, sometimes for hours at a time. The important factor is to make sure the outage is well publicized to all stakeholders. Oftentimes the applications on a server impact users far beyond the core project team, the IT department, and even the office location. The project team must identify EVERY application on the server (as well as all storage files, backups, etc.). I can tell you from experience that failing to identify even one of these applications can have catastrophic consequences. This is definitely one of those situations where you need to check, double-check, then check again. Make sure you know what you're moving.
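One cheap safeguard is to diff the project's inventory against what a discovery scan actually finds on each server. This is only a sketch with made-up application names; in practice the "discovered" set would come from a scanning or monitoring tool:

```python
# Hypothetical application names: "inventory" is what the migration plan /
# CMDB says runs on the server, "discovered" is what a scan actually found.
inventory  = {"payroll-db", "hr-portal", "file-share"}
discovered = {"payroll-db", "hr-portal", "file-share", "legacy-fax-gateway"}

missed = discovered - inventory   # running on the server but not in the plan
stale  = inventory - discovered   # in the plan but no longer on the server

if missed:
    print("NOT in the migration plan:", sorted(missed))
```

Anything in `missed` is exactly the kind of forgotten application that causes a catastrophic surprise on migration day.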


This list is by no means comprehensive. Data center migration program management is tremendously complex and could hardly be covered in 30 blog posts, let alone one. But keeping these five points in mind will minimize the risk for the program.


Good Talk,
Tom

Saturday, December 4, 2010

E-Health Care

I came across a great satirical video regarding the sorry state of technology in our health care system. If Air Travel worked like health care...that's a scary thought.



Please note:  I'm reposting this video here under fair use standards. All rights remain with the original creators.

Google and Groupon - Who is Crazier?

I was just reading that Google offered to buy the social-commerce site Groupon.com for around $6 Billion…at first I swore the number must have been a typo. Google must be crazy to pay that much. Then I came across another article reporting that Groupon was going to reject the Google bid and remain an independent company. Right now my head is spinning. I can’t decide who’s crazier. $6 Billion seems like way too much money for a website/company whose sole source of revenue is derived from small business.  Furthermore, according to nearly all reports I’ve read (which I will freely admit is not a comprehensive sample), nearly half of the businesses that have tried Groupon say they will not use the service again.


There is a fundamental issue with Groupon from the perspective of a small business. While the exposure and influx of new customers might seem like a positive in the short term, one must consider the long-term cost. First of all, the business is discounting its product or service by 50% or more. On top of that, it needs to pay Groupon 50% of the revenue it earns from the promotion. In essence, the company is giving up 75% of its normal revenue - keeping just 25 cents on the dollar - destroying margins and eliminating profitability (except in a few select industries with extraordinary mark-ups).
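To make the arithmetic concrete (using an illustrative $100 list price; the 50% discount and 50% revenue share are the figures cited above):

```python
list_price   = 100.00               # illustrative normal price
coupon_price = list_price * 0.50    # Groupon deal: 50% off
groupon_cut  = coupon_price * 0.50  # Groupon keeps half the coupon revenue

merchant_take = coupon_price - groupon_cut
print(merchant_take)                # 25.0 -> the merchant keeps 25% of list price
```

So on a $100 sale the merchant collects $25, before even counting the cost of delivering the product or service.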


Now, you could argue that it’s ok to lose money on a promotion as long as the company is gaining customers and setting the stage for long-term growth and profitability. Unfortunately, that is not what happens in an overwhelming majority of the cases. As has been demonstrated by academic theory and real-world study, price-promotions generally harm long-term brand value (see here). In short, value consumers rarely become brand loyal consumers. They will hop from one phenomenal deal to the next seeking steep discounts along the way. These are not the kinds of consumers on whom you want to base your business.

Groupon is currently enjoying extraordinary success. The founders should be extremely proud of the business they have built and of the impact they have had on group buying and social media. Each day more and more consumers sign up and take part in the daily Groupon. However, Google must recognize that the customers Groupon needs to worry about are the small businesses. As the group coupon/group buying space matures, competitors will emerge and small businesses will get smarter. I have serious doubts that the business can scale enough to cover a $6 Billion purchase price. I'd love to hear other thoughts. Who is crazier? Or are they both perfectly sane?


Good Talk,
Tom

Wednesday, December 1, 2010

Electric Cars and Electric Rates - What Policy/Rate changes are coming?

 I've spent a lot of time thinking about the future of our electric grid and how it will change with the advent of a truly smart grid as well as the widespread adoption of electric cars. One of the biggest constraints of electricity generation and delivery is that we cannot (to date) effectively store electricity. We either use it or lose it.


We're currently working on a smart grid that will allow energy to flow from the utility company to the consumer, and also the other way. This will allow those with wind mills or solar panels to sell energy back to the grid.


Electric cars rely on huge batteries that charge at night and then power the car during the daily commute. One of the really exciting things about widespread adoption (should it occur) is that we will have the largest collection of batteries in the history of man. If you charge your car fully overnight (when electricity is cheaper), then drive a few miles to work, you're not going to deplete the whole battery. There will be a lot of stored charge.


Imagine being able to plug in at work and sell that electricity back to the utility at peak rates (after having paid off-peak rates to charge it over-night). For this to work, however, you will need a personal utility account (like a debit card). Otherwise, your employer would get the credit (since it's probably his plug).
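A back-of-the-envelope sketch of that arbitrage (every number here is an assumption for illustration, not an actual utility tariff):

```python
off_peak_rate = 0.06   # $/kWh assumed overnight charging rate
peak_rate     = 0.15   # $/kWh assumed midday sell-back rate
surplus_kwh   = 15.0   # assumed charge left in the battery after the commute

daily_profit = surplus_kwh * (peak_rate - off_peak_rate)
print(f"${daily_profit:.2f} per day")   # $1.35 with these assumed numbers
```

The per-day number is modest, but multiplied across millions of parked cars it would represent a meaningful amount of peak capacity for the utilities.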


There are, of course, other uses: What if you get to a friend's house for dinner and your car is running low? Should your friend have to pay for you to charge your car? Of course not!


We will need to separate the idea of a utility meter from a house/office. Currently utilities track premises. In general there is one premise per home. I think in the future it might look more like one premise per person.


There are also other options utility companies are looking at. According to a recent Wall Street Journal article, some utilities are offering an "all-you-can-eat" rate plan specifically for charging electric cars (Detroit Edison), while others are offering lower rates and free charging stations. This does not solve the problem of charging away from home, but it can provide an incentive to move to an electric car. The same article suggests that someone driving a Nissan Leaf 100 miles a day (which admittedly seems like a lot) can save more than $350 a month compared to driving a traditional internal combustion car getting 25 MPG.
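That savings claim is easy to sanity-check. In this sketch the gas price, electricity rate, and the Leaf's efficiency are my assumptions, not figures from the article, so treat the result as a ballpark:

```python
miles_per_day = 100
mpg           = 25
gas_price     = 3.50    # $/gallon, assumed
kwh_per_mile  = 0.34    # rough Nissan Leaf efficiency, assumed
off_peak_rate = 0.06    # $/kWh overnight, assumed

gas_cost_per_day = miles_per_day / mpg * gas_price               # $14.00
ev_cost_per_day  = miles_per_day * kwh_per_mile * off_peak_rate  # about $2.04
monthly_savings  = (gas_cost_per_day - ev_cost_per_day) * 30

print(f"${monthly_savings:.2f} per month")
```

With these assumptions the savings land in the same ballpark as the article's $350-a-month figure.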


 I think the really interesting question is "how will governments respond to electric cars?" I imagine we'll see some free or heavily discounted public charging stations, at least initially. Just as cities/states provide roads for our cars, I think it's reasonable that they will provide power to speed the adoption of electric vehicles (and by "free", of course, I mean taxpayer funded...)


 I'll be interested to see what other policy changes emerge as a result of electric cars.


Good Talk,
Tom


[Source: http://online.wsj.com/article/SB10001424052748703882404575519641915241922.html]