Tuesday, December 28, 2010

Why Exempt Wireless Carriers (Net Neutrality Rules)

I’ve spent a little more time reading about the newly passed FCC rules for internet neutrality, and I’ll have more comments over the next week or two, but the first thing that jumped out at me was the FCC’s decision to exempt wireless carriers from the provisions of the rule. It seems extremely strange to me that they would choose to focus the new regulation only on the legacy fixed-line carriers.

According to the FCC statement after the vote:
“Mobile broadband presents special considerations that suggest differences in how and when open Internet protections should apply. Mobile broadband is an earlier-stage platform than fixed broadband, and it is rapidly evolving.”
While the facts in this statement are true, I question the conclusion that the infancy of mobile broadband warrants a lesser standard of regulation. Mobile broadband is currently the fastest growing segment of the internet. A Fortune Magazine article suggests that half a billion smartphones could be sold next year. That’s 500,000,000 more people accessing the wireless web. For many people, especially those in the lower income brackets, wireless is the only way to access the internet. In the very near future, wireless broadband may end up being the primary way we all access the internet. Why would Chairman Genachowski choose to focus on tired legacy technology instead of getting ahead of the curve with wireless?

If mobile/wireless takes off as much as most experts predict, we run the serious risk of developing competing internets with vastly different standards. While legacy fixed-line carriers will be subject to strict standards and will look to throttling, metered access and other solutions, the wireless providers will face little in the way of regulation and may grow up with a completely different model. This would require consumers to adapt to two different forms of the web, and it would make web development much more difficult. It seems to me that such a duopolistic model will stifle innovation and hamper those who are trying to build tomorrow’s great web apps/services.

The wireless broadband experience already has enough obstacles for most users: handsets that don’t work across networks (think iPhone on Verizon), early termination fees, and restrictions on network sharing. By failing to impose the tougher standards of net neutrality on the wireless providers, the FCC and Chairman Genachowski missed an opportunity to help remove some of these obstacles.

So, why did this all come about? I think another quote might put things into stark relief:
“[We] recognize that wireless broadband is different from the traditional wireline world, in part because the mobile marketplace is more competitive and changing rapidly. In recognition of the still-nascent nature of the wireless broadband marketplace, under this proposal we would not now apply most of the wireline principles to wireless, except for the transparency requirement.”
This comes from a joint Google/Verizon statement issued in August. Compare the wording of this statement with the wording of the FCC statement above. It would appear that Google and Verizon’s heavy lobbying has paid off tremendously in the FCC’s ruling. (If you’re wondering why Google has a vested interest in wireless, it’s the huge potential they see in mobile advertising. They are hoping to generate a substantial part of their future revenue from ads delivered to mobile devices. For a clear indicator of this potential, one needs to look no further than Google’s $750 million acquisition of AdMob in May.)

Please note that I’m assuming a certain philosophical acceptance of net neutrality as a general principle, an issue that is far from black and white. My point is just that if the FCC is going to impose net neutrality standards, why exempt the segment where those standards might benefit consumers most?

I’ll have more on the new FCC rules in the coming weeks.

Good Talk,
Tom
[Sources: http://www.economist.com/blogs/babbage/2010/12/net_neutrality, http://tech.fortune.cnn.com/2010/12/22/2011-will-be-the-year-android-explodes/, http://googleblog.blogspot.com/2010/05/weve-officially-acquired-admob.html]

Blinking Office Lights More Than An Annoyance…

We’ve all most likely been annoyed by blinking fluorescent lights in an office building. Those stark, harsh lights seem designed to drive us all to migraines and make work even more frustrating. But the city of St. Cloud, Minnesota is looking to turn those lights into the next generation of wireless networks.

St. Cloud municipal offices will pilot a new technology from LVX that uses LED lights to transmit signals to special modems attached to the computers below. These modems interpret the blinking (much like a dial-up modem interprets tones) and then send messages back from the computer to the lights, which are equipped with receivers as well. Current tests show that this system can achieve speeds comparable to home DSL service (roughly 3 megabits per second). While that performance suggests Wi-Fi would be a better fit, the point of these LED-based networks is to work in tandem with Wi-Fi to reduce congestion on overcrowded office networks.
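For the curious, the core idea is just on-off keying: the light toggles faster than the eye can see, and a photodetector on the modem turns those toggles back into bits. Here’s a minimal sketch of that encode/decode round trip in Python (the framing is my own invention for illustration; LVX hasn’t published the details of its actual scheme):

```python
# Toy on-off keying: 1 = LED on, 0 = LED off. A real system would add
# framing, error correction, and clock recovery on top of this.

def encode(data):
    """Turn bytes into a stream of on/off states for the LED."""
    bits = []
    for byte in data:
        for i in range(7, -1, -1):       # most significant bit first
            bits.append((byte >> i) & 1)
    return bits

def decode(bits):
    """Reassemble light levels sampled by the photodetector into bytes."""
    out = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

signal = encode(b"hello")           # 40 on/off states for the LED
assert decode(signal) == b"hello"   # round trip works
```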

The really cool thing about these LED network systems is that they may actually make it cheaper to light your office, because LEDs are so much more energy efficient than traditional office lighting. Additionally, add-ons are available to sense ambient light and dim the LEDs to save more energy. You could also change the color to direct people around the office (i.e. “Follow the green lights to the xyz meeting”). That seems pretty cool.
But what about headaches? Constantly blinking lights (blinking in code, no less) sounds like a recipe for seizures and migraines. But remember, current fluorescent lighting already blinks at a much lower rate (about 60 times per second). These lights will blink much faster, meaning you should actually be less likely to notice the blinking (and therefore less likely to be bothered by it).

I think it’s awesome that we’re looking at different ways to communicate beyond Wi-Fi.

Good Talk,
Tom
[Sources: http://news.yahoo.com/s/ap/20101227/ap_on_hi_te/us_tec_internet_via_lighting, http://dvice.com/archives/2010/12/flickering-offi.php]

Net Neutrality Becomes a Reality

Yesterday (12/21) the FCC voted 3-2 to impose net neutrality standards on ISPs. I’ve talked a lot about net neutrality on this blog and I’m pretty happy to see the FCC taking some action (though I’d have much preferred that some body of elected officials had taken action). I have not yet had a chance to fully digest the specifics of the ruling (it’s been a busy few days at work…).

I’ll comment in full in the near future, but for now here is the announcement and some links to reactions/commentary/etc.

Official FCC Announcement:
http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-303745A1.doc
Chairman Genachowski’s statement:
http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-303745A1.doc
Steve Wozniak weighs in:
http://www.theatlantic.com/technology/archive/2010/12/an-open-letter-to-the-fcc-regarding-net-neutrality/68294/
Likelihood of a Republican Congress Overturning:
http://thehill.com/blogs/hillicon-valley/technology/134817-analyst-congress-unlikely-to-overturn-net-neutrality
Kevin Fogarty of IT World:
http://www.itworld.com/government/131583/what-you-lost-fccs-net-neutrality-ruling
Mashable:
http://mashable.com/2010/12/21/fcc-passes-net-neutrality/
MG from TechCrunch:
http://techcrunch.com/2010/12/21/verizon-google-fcc-net-neutrality/
Alexia from TechCrunch:
http://techcrunch.com/2010/12/21/fcc-net-neutrality-vote-is-just-the-beginning/

That’s all for now. It’s plenty of reading (some of which I haven’t gotten all the way through yet) and should give you a pretty good idea of the reaction. I’ll provide my own analysis soon.

Good Talk,
Tom

Sunday, December 19, 2010

Brad Burnham and USV Weigh in on Net Neutrality

I’ve written quite a bit about net neutrality over the last couple of months. I’ve learned a lot about the issue while reading and writing about it. I’ve gotten a few things right, and a couple wrong, but hopefully I’ve at least highlighted the importance of the issue. This evening I was reading a few of the blogs I follow when I came across a post by Brad Burnham of Union Square Ventures that talks specifically about the changes the FCC will take up on December 21. The crux of Brad’s proposal is based on language suggested by Barbara van Schewick, a professor at Stanford Law School:


A non-discrimination rule that bans all application-specific discrimination (i.e. discrimination based on applications or classes of applications), but allows application-agnostic discrimination.
The great thing about this approach is that it seems to answer the ISPs’ big objection to any net neutrality rule. ISPs can still regulate bandwidth to protect the health of their infrastructure. They can still throttle networks to handle demand, and they can still charge more for faster/better service. But they cannot discriminate based on the application or type of application. In other words, an ISP could not slow only video traffic, or only iTunes traffic. This should – and I emphasize “should” – prevent ISPs from stifling start-ups and new innovation.
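To make the distinction concrete, here’s a toy sketch (my own illustration, not language from the actual proposal) of the difference between a policy the rule would allow and one it would ban:

```python
# Application-agnostic: the decision looks only at how much a subscriber
# uses, never at what kind of traffic it is. This would be allowed.
def application_agnostic_policy(usage_gb, monthly_cap_gb=250):
    return "throttle" if usage_gb > monthly_cap_gb else "full speed"

# Application-specific: the decision singles out a class of traffic.
# This is exactly what the proposed rule would ban.
def application_specific_policy(traffic_class):
    return "throttle" if traffic_class == "video" else "full speed"

print(application_agnostic_policy(300))      # throttle -- based on usage alone
print(application_specific_policy("video"))  # throttle -- based on what it is
```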


I encourage you to read Brad’s post in its entirety here. You can also see a video of Barbara van Schewick talking about net neutrality here (warning: it’s almost 2 hours long). I would love to hear other thoughts on this proposal. It makes a lot of sense to me, but, like any proposed rule, there are sure to be unintended consequences that I’m not seeing.

Good Talk,
Tom

Wednesday, December 15, 2010

YouTube Offers Paid Rentals? Will It Work?

I’m apparently a bit behind the curve on this one. I was just messing around on YouTube (it’s 1AM, what else should I be doing?) when I noticed the sidebar offered the ability to rent Reservoir Dogs. I was pretty shocked, as I’d never before considered paying for something on YouTube and didn’t realize they were experimenting with different revenue streams. So, I did what anyone in my generation would do: I Googled it. (Side note: Firefox does not recognize Google as a verb…I feel like that should change).

Well, it turns out that this little experiment started a couple of years ago (apparently quite quietly). Google opened a store at www.YouTube.com/store that is accessible only in the United States. YouTube offers users the opportunity to rent movies for 24 hours for a fee of anywhere from $1.99 to $4.99 (at least that’s what I saw in the few minutes I spent on the site). So, while this was news to me, it wasn’t exactly shocking. It’s been clear for a long time that if Google hoped to make any money off its YouTube deal it would need to find new ways to monetize content and new streams of revenue. This appears to be an experiment aimed at doing just that.

So, what do I think? First off, the site navigation is TERRIBLE. In fact, the entire customer experience (at least up to the point of sale, where I stopped) is awful. Navigation is just too complicated. Filtering sucks, the search feature is not good (this is a Google company, remember), there are no customer reviews, no suggestion/recommendation engine, no favorites, and no previews available. It just strikes me as what an online movie rental site would have looked like had it launched in 1999. From the same people that brought you Gmail and, more importantly, the original YouTube, this is hugely disappointing.

I’m wondering if part of the problem is the way we shop for movies vs. the way we (and by “we” I mean “I”) use YouTube. For me, most of the fun of YouTube is the serendipitous discovery made possible by clicking through link after link in the suggestions panel. Beyond the video that brought me to YouTube in the first place, I never have an expectation of what I might find. This is completely different from the way I search for a movie to watch. While I might not know the exact movie, or even the type of movie, I don’t want to allow serendipity to play as large a role. When I’m committing a couple of hours and a couple of bucks, the decision process is much different than when I’m investing no money and only a few minutes. In other words, I’m more risk averse and thus in need of better guidance from the tools employed to help me out. That’s a big reason why the interface/customer experience feels so wrong for the YouTube store.

I also think it’s a tough market for YouTube to break into. Between Netflix, Redbox, your local video store, set-top boxes, Hulu, etc., I have a hard time seeing a niche for a YouTube store. (I’ll admit that Google TV could be a game changer, but I’m not familiar enough with it to comment, so I’m going to pretend it doesn’t exist.) When I want to rent a movie I go to Netflix and watch one. Or, if I’m not a Netflix subscriber (or they don’t stream the movie I want) I open up iTunes and rent it there. Given that my top-of-mind impression of YouTube (from years of experience) is grainy, shaky, homemade videos of people doing ridiculous things, I just don’t see myself ever using the service (not to mention the buffering of Flash video compared to Silverlight). I can’t imagine a time when I’d skip Netflix and iTunes (and Hulu and Comcast On Demand) for the YouTube store. Also, I’ve heard (though I can’t confirm or find a reputable enough source to quote) that the average age of YouTube users is under 16. These are people who have grown up on free video, torrent sites, etc. Add the fact that they don’t have credit cards, and it seems like a tough sell; it’s unlikely they will suddenly decide to pay for content. Lastly, the online video-on-demand market is still comparatively small.

Now, the other side of the coin: Google TV is coming. Getting videos off the computer screen and onto the TV could make a huge difference (something Netflix noted when it rolled out streaming to your TV). With a major site redesign, lots of help from Google TV, and the ability to stream onto an iPad (maybe with 3G/4G), YouTube could turn this into a winner. With YouTube revenue estimates ranging between $450 million and $1 billion this year, the company can afford to support the store until it catches on.

While I’m two years late to notice, I will be watching to see how this plays out. Until then, I’m keeping my Netflix subscription.
Good Talk,
Tom

Monday, December 13, 2010

Video: Leading with IT

Below is an interesting video of an MIT class being led through a case study. It's a long video, but Q&A starts at minute 22. Personally, I think it's worth watching as it touches on a number of important points relevant to IT and business professionals, not the least of which is to understand what hidden resources you have available and how technology can help capitalize/cash in on these resources.

Cloud Computing, Flexibility, Security and Wikileaks

I read two interesting articles in the past 10 days that highlight the security benefits corporations can gain by embracing the cloud. It's somewhat ironic: the first article involved Wikileaks, the infamous website known best for leaking classified US and foreign government documents, seeking refuge in the cloud by moving onto Amazon's EC2 cloud computing service, while the second article explains how Amazon's ability to scale was instrumental in protecting it from attack by supporters of Wikileaks.

Some back story might be in order: On November 28, 2010, Wikileaks, already well known for leaking classified cables of foreign governments, released the first in a series of leaks of US government documents. There was a near immediate uproar in America and two clear sides emerged rapidly. On one side were those who believed the leaking was, at a minimum, harmful to the US and in poor taste, with many going so far as to call it treasonous. On the other side were those who defended Wikileaks and who believe that open dissemination of information will ultimately lead to a healthier society.

As the battle raged in the media, hackers tried taking matters into their own hands. Within a matter of days Wikileaks' servers were overwhelmed by a Distributed Denial of Service (DDOS) attack. Wikileaks quickly moved its servers a couple of times, but each time was taken offline by the DDOS attack. Eventually Wikileaks settled on Amazon's EC2 cloud computing service. The attacks against this new host were entirely unsuccessful, as Amazon was able to add capacity rapidly to counteract the spike in traffic from the DDOS attack. On December 2, 2010, however, Amazon, bowing to public pressure and the threat of a boycott, removed Wikileaks from its servers and refused to host the site. (Around the same time, MasterCard and Visa froze Wikileaks' accounts and stopped processing donations/payments to the company.)

In response to Wikileaks being booted from Amazon's servers, hackers who supported Wikileaks embarked upon a campaign of retribution. On December 8, 2010, Visa and MasterCard websites were attacked and temporarily taken offline by supporters of Wikileaks in response to those companies ceasing to work with Wikileaks. Amazon was similarly attacked; however, the web giant was largely unaffected. One anonymous hacker tweeted "We can not attack Amazon, currently. The previous schedule was to do so, but we don't have enough forces." It seems that no matter how much traffic they sent to Amazon's servers, the company was able to respond with additional capacity to counter the attack.
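The dynamic is easy to see in a back-of-envelope sketch (all numbers invented; Amazon's real auto-scaling reacts to monitored metrics, not a loop like this):

```python
REQUESTS_PER_SERVER = 1000   # hypothetical capacity of one instance

def servers_needed(requests_per_second):
    # round up: even a little overflow traffic gets another server
    return max(1, -(-requests_per_second // REQUESTS_PER_SERVER))

# an attack ramping up over a few hours
for rps in (2000, 50000, 400000, 1200000):
    print(f"{rps:>8} req/s -> scale out to {servers_needed(rps)} servers")
```

On fixed infrastructure the site falls over somewhere on that list; on an elastic platform the bill goes up but the site stays up.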

This episode shows the incredible flexibility available with cloud computing (as well as Amazon EC2's resilience). As more companies move into the cloud, I would expect to see the effectiveness of DDOS attacks abate somewhat.

Good Talk,
Tom 

[Sources: http://centerstance.wordpress.com/2010/12/11/cloud-computing-security-and-amazon-why-elasticity-matters/, http://www.infoworld.com/d/cloud-computing/can-cloud-computing-save-you-ddos-attacks-306, Wikipedia.com]

Sunday, December 12, 2010

Open Platform vs. Closed Platform - Mobile OS

I'll start by saying that Fred Wilson of Union Square Ventures inspired this post when he wrote the following:

"Think RIM is going to struggle more and more every day. Moves like they are making against Kik, which provides cross platform BBM, are likely to come back to haunt them. They should be making it easier for their users to chat with iPhone and Android users, not harder. Open platforms win and closed platforms die. And RIM still does not get what being an open platform means."

Fred is a very influential tech venture capitalist who has backed companies like Twitter, Foursquare, Tumblr, and Zynga long before they became household names. When he has an opinion on the future of mobile technologies people sit up and take notice.

I blogged exactly 11 months ago about where I thought the mobile application future was headed (another post inspired by Union Square Ventures) and I wrote then that "I honestly can not see anything other than an open development standard emerging (and I'm loath to bet against Google), but I'm excited to see this all play out." A mere 11 months later it is still too soon to tell if I (and Fred and Brad) are right, but the numbers are starting to bear us out. Currently Google's share of the smartphone OS market is growing at 6.5% per quarter. RIM (BlackBerry) lost 3.5% in the last quarter and Apple (iPhone) stayed relatively stagnant at +0.8%. If this trend continues, I imagine Apple and RIM, both closed platforms, will find themselves significantly trailing Google, an open platform, within the next 5 years. (Obviously, I think the iPhone will get a rather large, though temporary, bump when it is finally released on Verizon - especially if it is a 4G/LTE version. However, I think the long-term trend will remain unchanged.)
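Here's the back-of-envelope math behind that prediction, reading those figures as percentage-point changes per quarter (starting shares are rough placeholders for late 2010, and of course no trend holds for 20 straight quarters, since shares must sum to 100%):

```python
shares = {"Android": 26.0, "RIM": 33.0, "iPhone": 25.0}   # assumed starting %
deltas = {"Android": +6.5, "RIM": -3.5, "iPhone": +0.8}   # points per quarter

for quarter in range(8):                 # project just two years out
    for platform in shares:
        shares[platform] = min(100.0, max(0.0, shares[platform] + deltas[platform]))

print({p: round(s, 1) for p, s in shares.items()})
# {'Android': 78.0, 'RIM': 5.0, 'iPhone': 31.4} -- the totals overshoot 100,
# which is the point: the trend breaks long before 5 years, with Android on top.
```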

I'm still excited by coming developments in mobile technology (LTE on Verizon anyone?) as our phones and networks get ever faster and more powerful. And, I'm willing to bet that as the technology improves developers are going to be less willing to put up with stringent controls on the distribution of their work.

Maybe I'll re-visit this post in another 11 months.

Good Talk,
Tom

US CIO Announces New IT Policy

The CIO of the United States government released a detailed 25-point plan on December 9, 2010 that aims to restructure federal government IT by creating a "cloud first" policy and reducing the number of federal government data centers by 800. While nearly everything in the document has been talked about by the government at one point or another over the last few years, this document really ties it all together and seems to create a path toward an actionable plan.

Two key points that stood out to me were the adoption of a cloud-first policy and the consolidation of data centers.

The section on the "cloud-first" policy talks about a web-based video company (I'm not sure of the exact company, but think YouTube or the like) and how it was able to scale from an initial demand of 25,000 customers to 250,000 within three days, and ultimately to 20,000 new customers per hour. In contrast, the government-run Cash for Clunkers program (officially called the Car Allowance Rebate System) had poorly scalable IT infrastructure, and when demand exceeded estimates, the system crashed repeatedly. It took more than a month for architects and developers to stabilize the environments. The document states that cloud technologies are economical, flexible and fast. The plan calls for each federal agency to identify three "must-move" services within the next 3 months, and then actually move all three to the cloud within the next 18 months.

The second initiative that caught my attention (actually the first one in the plan - I'm summarizing out of order) was the goal to eliminate 800 data centers by 2015. The document lays out 3 key steps to do so:

1) Identify program managers at the agencies to lead the consolidation effort
2) Launch a data center consolidation task force
3) Create a publicly available data center dashboard to track the consolidation progress

As I've written about before, data center consolidations can be tricky and I think point number one is going to be the most critical. Strong program managers are essential to the success of the plan. If all goes well, the government can save a lot of money by eliminating so many data centers (I'll leave a discussion about the social effects of the resultant job cuts to someone else).

The entire 25-point plan is worth reading (it's only about 40 pages) and can be found here: 25-Point-Implementation-Plan-To-Reform-Federal-IT

I'm glad to see some quality discussion about how the government can adopt IT best practices. Hopefully quality action will follow.

Good Talk,
Tom

Friday, December 10, 2010

System Access: Adding, Deleting, Maintaining User Profiles

I recently started on a new project at work. I'm in a fairly non-technical role on a technical project. I've been there just over a week now and I finally feel like I understand what's going on, where I fit in, and where the project is headed (for better or worse). What's sad is that even after 11 days, I cannot (officially) log onto my client's computer systems. I have no badge. I've been roaming the building for days, but I officially do not exist.

This is an example of poor access controls. Despite having no log-in of my own, I've spent hours navigating around their network and learning what I need to learn (that might sound hacker-ish... it's not meant to. The project I'm working on uses a SharePoint site and a few other cloud-based tools for tracking and collaboration). Despite having no badge or key card of my own, I've spent days navigating around their buildings (plural). This got me thinking today about how IT departments handle access issues. Anyone who works as a consultant regularly deals with getting acclimated to new cities and getting access to new client systems. But, for reasons I don't fully understand, some clients are extremely proficient at the task of managing users and others are not. I've decided it mainly comes down to how company leadership balances security policy with the need to get things done.

The utility companies I've worked at are, for good reason, extremely focused on security. It's near impossible to enter a building or log onto a system without a badge or a system profile. Most of the buildings I've been in have gates in the lobby that alarm if you don't swipe your card, elevators that require a key card to operate, and uniformed security in fixed and rotating positions. Additionally, the systems are configured so that a user can only be signed in from one location at a time (no sharing of log-ins), accounts lock out if the password is entered incorrectly too many times, etc. They cannot be accused of being lax on security. At the same time, I've never waited more than a few days for all the access I need. I think this is partly because utilities operate in a regulated environment with tightly controlled costs and margins. Any delays can adversely affect costs and profitability, which in turn can annoy powerful regulators.

I can contrast this with a media company I worked at that seemed mostly unconcerned with security. On day one I was handed a plain, unremarkable Kastle card and a form for access to the building. I was told to try to turn the form in by the end of the first week, but the Kastle card was already active and I had full access to any floors I needed (mind you, my client only had offices on two floors). When I met my client sponsor upstairs (not in the lobby), he gave me a user name and password from a former consultant who had left a few weeks earlier. He said it would get me up and running until my paperwork was processed. The upside is that I was working full-bore on day one. The downside is that IT security was nearly non-existent. I mean, I was using the log-in of a guy who'd been gone for 3 weeks... why did it even still work? These things should be turned off right away.

Then I've worked at government agencies. These guys take security to a whole new level. I had to go to the client site two weeks before my start date to pick up paperwork, get fingerprinted, and sign some forms. Then, on day one I was given a guest pass and an escort. These two things stayed with me for two weeks. I could not carry my own laptop into the building until it was certified as virus/malware free by IT and given a special tag to denote it as "non-government property". I was required to leave my BlackBerry in the car because it had a camera (I eventually bought a new one with no camera). Needless to say, for the first couple of weeks my productivity was less than optimal (though my bill rate was pretty good!). Because of the sensitivity of the agency I was working with and the nature of the material I was handling, security was a vastly higher priority than cost or profitability.

What it all comes down to is finding the right balance of security vs profitability for your company. For a highly sensitive government agency, it makes sense that efficiency would take a back seat to security. For a small media company it might be OK to relax a bit on security. And for a utility company, it's probably right that they fall somewhere in between.

A few tips to get up and working as fast as possible:

1) Start the process and paperwork as soon as possible. If you know weeks ahead of time that you're going to start on XYZ day, call your client and find out what can happen ahead of time. Sometimes providing a name and DOB can really get the ball rolling. Likewise, if you're a buyer, reach out to your consultants and request the necessary information. There is no reason to wait until the consultant is on the ground (charging your company).

2) Do not give out your password. I don't know how my client got the old consultant's log-in information, but make sure to protect your own. Change passwords regularly, do not use the same password for everything, and never give it out. If you're a buyer, delete access as soon as possible after someone leaves (see the sketch after this list), require password changes frequently, and never encourage password sharing.

3) Grant only the access needed. If someone only needs access to one system, provide only that access. If they only need access to one floor in a building, do not give them access to all the floors. And if you are a consultant (or an employee even), don't attempt to access things you don't need. This will only open you up to liability.
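On tip #2, the fix is mostly mechanical: reconcile active accounts against the current staff roster and disable anything orphaned. A minimal sketch of the idea (the data sources and disable_account function are placeholders; in practice this would talk to your HR feed and your directory, e.g. Active Directory):

```python
active_accounts = {"jsmith", "mjones", "consultant_old"}   # from the directory
current_staff = {"jsmith", "mjones"}                       # from the HR feed

def disable_account(username):
    # placeholder: a real version would call the directory's API
    print(f"disabling orphaned account: {username}")

for account in sorted(active_accounts - current_staff):
    disable_account(account)   # runs the day someone leaves, not 3 weeks later
```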

Good Talk,
Tom

Thursday, December 9, 2010

Vendor Management

When you work in IT, it's a fact of life that much of your time will be spent working with vendors and consultants. The field has become extremely specialized, and it often makes sense to work with specialists who have years of experience mastering the issue that you are facing for the first time. Oftentimes a group of skilled contractors can design, build, implement, test, or upgrade a system faster and cheaper than a company's own resources could. (There is a reason IBM moved out of making computers and into consulting.) It often makes perfect sense to outsource certain problems (in fact, sometimes it's the only way).

However, working with contractors comes with its own set of risks. While you generally get skilled resources who can be on-site and up to speed very quickly, you can end up surrendering a lot of control over the project(s). I've seen firsthand how contractors can differ greatly with their clients over how a project should proceed or what the timeline needs to be. That's why it is important that, from the very beginning, buyers carefully develop and manage the relationship with the consultants/vendors. There are two primary steps to include in this management process that happen BEFORE the vendor starts working:

1. Vet the Contractor
Oftentimes buyers are far less familiar with the process than the vendors are (a buyer may sign one major IT project in his career, whereas consultants will sign 5-10 every year) and thus they rely heavily on advice from the very people selling them the services. There is nothing necessarily wrong with this - a big part of why companies pay high fees is for the consultative advice. But a buyer needs to vet the conclusions of potential vendors with others in the industry.
  • Ask for references, call (or better yet visit) other clients of the firm you are considering.
  • Talk to other consulting firms currently working at your company - You'll get sometimes biased opinions, but at least you'll hear the other side
  • Talk to former clients - find out what went well, what went wrong, etc.
2.  Vet the Contract
Despite the promises of the salesman (or saleswoman) you've been talking to for the last 6 weeks, the only thing that matters once the project starts is what the contract (also called a statement of work) says. You may have been told that the company can do XYZ for you, but if you don't write it in, you can rest assured that it's either not getting done or it's going to cost you extra.
  • Lay out a specific time line
  • Include penalties for non-performance/missed deadlines
  • Define scope as clearly as possible - what will the contractors do, what will the client be required to provide for them to do it, what is specifically excluded under the contract?
  • Define who ultimately owns what - vendors/consultants defend their Intellectual property/capital rights aggressively. Make sure to spell out what work papers, processes, tools you will own when the project is over. 
Obviously, there is a lot more to negotiating with a vendor than just what I've laid out above, but you would be surprised how many companies fail to do even this little bit. Too often the big-name vendors come in and buyers just trust that they know what they're doing. Or you'll get a really strong first contract, and then each successive add-on project starts with a weaker and weaker contract. Ultimately, you'll end up spending millions of dollars a year while hearing excuse after excuse why the project is not done. If you don't have specific penalties/separation criteria in the contract and something goes wrong, you'll be left in a really tough spot.

 Good Talk,
Tom

Sunday, December 5, 2010

Data Center Program Management


Data center migration programs require coordination across multiple departments/divisions within a company and can be incredibly complex. Additionally, such programs typically involve millions of dollars and the company's most critical applications. As such, it is important that these programs are well thought out, well planned, flawlessly executed, and rigorously monitored. Below I offer five tips to help ensure successful program management.

1) Use Tools that Work: 
 While "one-size fits all" might work well for hats or gloves, it is not a good idea in Data Center Migration program management. I've seen some clients with incredibly over-engineered tools and processes. Others have no standards at all. What's important is not the complexity or simplicity of the tools, it is, rather, to ensure that the tools fit the culture and capabilities of the company and that the meet the data transparency needs of the program. Some companies might be OK with basic excel spreadsheets, but would be overwhelmed by complex, custom built PM tools. Other companies are going to require something more involved and will be comfortable adapting to a new tool. Companies should understand that there are not one-size fits all tools, but only one size fits now tools. 



2) Adapt Best Practices
It is critical that any Project Management Office (PMO) adapts as the project moves forward. As more information becomes known and lessons from early successes and failures are learned, the project managers must discard broken processes and ineffective tools while promoting those that work well. Additionally, and equally important, the PMO must search for best practices wherever they exist. We all bring best practices from our own experience, but a great PMO will look outside the company as well as to other departments and programs within the company. A great PMO will get better as the project progresses.



3) Engage Key Leadership
This probably should have come first, as it is by far the most important factor in project success or failure. By engaged leadership I don't mean a strong project manager; I mean the project must have an active executive within the organization who checks on the project regularly, holds the project team accountable for deadlines and goals, and champions the program to others within the company - removing roadblocks and opening avenues for collaboration. Additionally, an engaged executive can often expedite key decisions.


4) Measure, Measure, Measure
It is an old cliché that "what gets measured gets done", but it's a persistent cliché for a reason: it's true. When designing program plans, it is essential to ensure that the proper metrics are created and the proper deadlines set. Equally important, these deadlines and metrics, as well as progress against them, must be clearly and regularly communicated to the project team members.


5) Know What You Are Migrating
When migrating from one data center to another, it is inevitable that servers and applications are going to come offline, sometimes for hours at a time. The important thing is to make sure the outage is well publicized to all stakeholders. Oftentimes the applications on a server impact users far beyond the core project team, the IT department, and even the office location. The project team must identify EVERY application on the server (as well as all storage files, backups, etc.). I can tell you from experience that failing to identify even one of these applications can have catastrophic consequences. This is definitely one of those situations where you need to check, double-check, then check again. Make sure you know what you're moving.


This list is by no means comprehensive. Data center migration program management is tremendously complex and could hardly be covered in 30 blog posts, let alone one. But keeping these 5 points in mind will minimize risk for the program.


Good Talk,
Tom

Saturday, December 4, 2010

E-Health Care

I came across a great satirical video regarding the sorry state of technology in our health care system. If Air Travel worked like health care...that's a scary thought.



Please note:  I'm reposting this video here under fair use standards. All rights remain with the original creators.

Google and Groupon - Who is Crazier?

I was just reading that Google offered to buy the social-commerce site Groupon.com for around $6 billion… at first I swore the number must have been a typo. Google must be crazy to pay that much. Then I came across another article reporting that Groupon was going to reject the Google bid and remain an independent company. Right now my head is spinning. I can’t decide who’s crazier. $6 billion seems like way too much money for a website/company whose sole source of revenue is derived from small businesses. Furthermore, according to nearly all the reports I’ve read (which I will freely admit is not a comprehensive sample), nearly half of the businesses that have tried Groupon say they will not use the service again.


There is a fundamental issue with Groupon from the perspective of a small business. While the exposure and influx of new customers might seem like a positive in the short term, one must consider the long-term cost. First of all, the business is discounting its product or service by 50% or more. On top of that, it needs to pay Groupon 50% of the revenue it earns from the promotion. In essence, the company is giving up 75% of its revenue, destroying margins and eliminating profitability (except in a few select industries with extraordinary mark-ups).
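The math is worth working through once, for a hypothetical $100 service sold as a 50%-off deal:

```python
list_price = 100.00
deal_price = list_price * 0.50          # customer pays $50 (50% discount)
groupon_cut = deal_price * 0.50         # Groupon keeps half of that: $25
merchant_keeps = deal_price - groupon_cut

print(merchant_keeps)                    # 25.0
print(1 - merchant_keeps / list_price)   # 0.75 -> 75% of revenue given up
```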


Now, you could argue that it’s OK to lose money on a promotion as long as the company is gaining customers and setting the stage for long-term growth and profitability. Unfortunately, that is not what happens in the overwhelming majority of cases. As has been demonstrated by academic theory and real-world study, price promotions generally harm long-term brand value (see here). In short, value consumers rarely become brand-loyal consumers. They will hop from one phenomenal deal to the next, seeking steep discounts along the way. These are not the kinds of consumers on whom you want to base your business.

Groupon is currently enjoying extraordinary success. The founders should be extremely proud of the business they have built and of the impact they have had on group buying and social media. Each day more and more consumers sign up and take part in the daily Groupon. However, Google must recognize that the customers Groupon needs to worry about are the small businesses. As the group coupon/group buying space matures, competitors will emerge and small businesses will get smarter. I have serious doubts that the business can scale enough to justify a $6 billion purchase price. I’d love to hear other thoughts. Who is crazier? Or are they both perfectly sane?


Good Talk,
Tom

Wednesday, December 1, 2010

Electric Cars and Electric Rates - What Policy/Rate changes are coming?

I've spent a lot of time thinking about the future of our electric grid and how it will change with the advent of a truly smart grid and the widespread adoption of electric cars. One of the biggest constraints of electricity generation and delivery is that we cannot (to date) effectively store electricity. We either use it or lose it.


We're currently working on a smart grid that will allow energy to flow from the utility company to the consumer, and also the other way. This will allow those with windmills or solar panels to sell energy back to the grid.


Electric cars rely on huge batteries that charge at night and then power the car during the daily commute. One of the really exciting things about widespread adoption (should it occur) is that we will have the largest collection of batteries in the history of man. If you charge your car fully overnight (when electricity is cheaper), then drive a few miles to work, you're not going to deplete the whole battery. There will be a lot of stored charge.


Imagine being able to plug in at work and sell that electricity back to the utility at peak rates (after having paid off-peak rates to charge it overnight). For this to work, however, you will need a personal utility account (like a debit card). Otherwise, your employer would get the credit (since it's probably his plug).
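A rough sketch of the arbitrage, with invented numbers (a 24 kWh battery is roughly a Nissan Leaf's capacity; the rates are placeholders, and real tariffs and charging losses would shrink the spread):

```python
battery_kwh = 24.0
off_peak_rate = 0.06      # $/kWh overnight (assumed)
peak_rate = 0.18          # $/kWh midday (assumed)
commute_kwh = 6.0         # energy actually used driving to work

cost_to_charge = battery_kwh * off_peak_rate                # $1.44 overnight
revenue_at_peak = (battery_kwh - commute_kwh) * peak_rate   # $3.24 at work

print(f"daily spread: ${revenue_at_peak - cost_to_charge:.2f}")   # ~$1.80
```

Small per car per day, but multiply by millions of cars and it changes how peak demand gets met.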


There are, of course, other uses: What if you get to a friend's house for dinner and your car is running low? Should your friend have to pay for you to charge your car? Of course not!


We will need to separate the idea of a utility meter from a house/office. Currently utilities track premises. In general there is one premise per home. I think in the future it might look more like one premise per person.


There are also other options utility companies are looking at. According to a recent Wall St. Journal article, some utilities are offering an "all you can eat" rate plan specifically for charging electric cars (Detroit Edison), while others are offering lower rates and free charging stations. This does not solve the problem of charging away from home, but it can provide an incentive to move to an electric car. The same article suggests that someone driving a Nissan Leaf 100 miles a day (which admittedly seems like a lot) can save more than $350 a month compared to driving a traditional internal combustion car getting 25 MPG.
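The figure passes a quick sanity check with round numbers (all assumed: $3.00/gallon gas, about 0.34 kWh per mile for the Leaf, a $0.05/kWh off-peak rate):

```python
miles_per_day, days_per_month = 100, 30

gas_cost = miles_per_day / 25 * 3.00 * days_per_month            # $360/month
electric_cost = miles_per_day * 0.34 * 0.05 * days_per_month     # ~$51/month

print(round(gas_cost - electric_cost))   # ~309 -- same ballpark as the $350 claim
```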


I think the really interesting question is "how will governments respond to electric cars?" I imagine we'll see some free/heavily discounted public charging stations, at least initially. Just as cities/states provide roads for our cars, I think it's reasonable that they will provide power to speed the adoption of electric vehicles (and by "free", of course, I mean taxpayer funded...).


 I'll be interested to see what other policy changes emerge as a result of electric cars.


Good Talk,
Tom


[Source: http://online.wsj.com/article/SB10001424052748703882404575519641915241922.html]

Tuesday, November 30, 2010

IT and Energy - Implications for Economics, Environment and National Security

The Lawrence Berkeley National Laboratory estimates the cost of power outages in the United States at over $80,000,000,000 (that's $80 billion) each year. The major outage in 2003 that blacked out large portions of the Northeast is estimated to have cost the city of New York over $1 billion (or $36 million per hour) and has an estimated overall cost of over $6 billion (these figures include direct costs and indirect costs such as loss of inventory due to spoilage, lost business revenue, etc.). Clearly our aging power grid poses real risks to our economy. Given our reliance on computers (which require electricity) for everything from war fighting to financial transactions to public transportation, it's also easy to see that our aging infrastructure poses a real threat to national security. Our government has already discovered concrete plans by terrorist organizations to target our electricity grid.

So how can IT help? Broadband technologies coupled with cutting-edge demand management software, outage monitoring sensors, and improved data storage and analysis can improve both the cost effectiveness and security of our energy grid. Currently the grid is a one-way street. Energy flows from one of the thousands of utility companies to one of the millions of homes and businesses across the US. Each month the usage of that home or business is tabulated/read (either over a network or by a meter reader) and a bill is prepared. When an outage happens, the utility is slow to find out and slow to respond.

In the future, a smart grid can help in all these areas. A smart grid turns our one-way street into a two-way, multi-lane superhighway. Electricity flows from the utility to the consumer, but can also flow from the consumer back to the grid (think solar panels, windmills, and electric car batteries), allowing for more effective peak demand management and creative revenue models.

Demand management software can keep the generation of electricity optimized. Currently, peak demand (or critical peak demand) requires utility companies to purchase expensive electricity on the spot market or to fire up old, outdated, expensive, and heavily polluting plants that are kept only for emergencies. New software can help both customers and utilities manage demand by issuing load control commands (imagine the utility remotely turning off your air conditioner) and voluntary demand incentives (imagine getting a text message asking you to turn off your A/C in exchange for a credit on your bill). All of this can be automated and optimized.
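A toy version of those two levers, just to show the shape of the logic (thresholds, credits, and the actions are all invented for illustration):

```python
CRITICAL_MW = 950    # past this, involuntary load control kicks in
HIGH_MW = 850        # past this, voluntary incentives go out

def manage_demand(current_mw):
    if current_mw > CRITICAL_MW:
        return "load-control command: cycle off enrolled A/C units"
    if current_mw > HIGH_MW:
        return "text enrolled customers: bill credit to raise thermostat 2 degrees"
    return "no action: demand within normal range"

for load in (800, 875, 975):
    print(load, "MW ->", manage_demand(load))
```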

Remote sensors along the grid can provide real-time information to the utility companies about the health of the grid. The 2003 blackout was caused when one of the high-capacity lines in Ohio became over-taxed, causing it to heat up and sag (as lines carry more power they get hotter, which causes them to expand and sag). This particular line sagged too close to an untrimmed tree, resulting in a "flashover" that caused a surge. Ultimately the surge caused a cascading blackout affecting millions. A remote sensor on this part of the grid could have alerted the local utility (in this case FirstEnergy Corporation) and the line could have been shut down or throttled. (I'm intentionally ignoring the fact that better landscaping could also have prevented the blackout.)

Finally, better data management through the smart grid can mitigate the effect of outages and reduce their overall impact and cost. Outage Management Systems (OMS) can proactively monitor a company's network and take action in the event of an outage. This will decrease response time and scope, leading to faster restoration of service.

IT is needed every step of the way to bring about a future smart grid. With over 3,000 utility companies in the United States, interoperability standards will need to be developed (my guess is IP, but it could be something entirely different). Additionally, technologies for the consumer will need to be developed. I've had the privilege of testing some of the most cutting-edge load control devices, thermostats, and in-home displays, and the information they provide will truly change consumer behavior. This provides a win for our economy, a win for the environment, and a win for national security.

The National Broadband Plan, the 2008 stimulus bill, and other legislation have already highlighted the benefits of a smart grid and provided some early funding. Large-scale programs are underway in TX, CA, FL and other states, and soon the technologies will be deployed in CT. The future is uncertain; the potential is great.

Good Talk,
Tom

Tuesday, November 16, 2010

Health Care and IT - National Broadband Plan

Recently the federal government released something called the National Broadband Plan, which lays out the framework for a national effort to ensure that, as a country, we are making the most of available broadband technology. One of the challenges the National Broadband Plan hopes to tackle is capturing the synergies between health care and IT. The plan is comprehensive in nature and lays out 5 key elements that will allow our health care providers to utilize current and future IT trends to provide better, cheaper, and more efficient care.

Ensure Access to Affordable Broadband by Rural Health Care Providers - the plan suggests that the FCC make use of around $400 million in annual funds already authorized. The concern is that commercially available broadband is often priced too high to be affordable, or is simply insufficient to support modern health care needs.

Create Economic Incentives to Encourage Broad Adoption of IT and IT Innovation - this tenet of the plan aims to increase the use of innovative IT products to improve the overall health care system. It supports Electronic Health Records and E-care. This part of the plan recognizes that broadband connectivity alone is not sufficient to get the most out of the current and future IT environment.

Use New Techniques in Data Analytics - Given the massive amounts of data generated and stored by modern IT systems, a move to broader adoption of health care IT will allow providers to make use of advanced analytics techniques. Spotting patterns in data may lead to better treatments and new cures.

Revise Standards around Licensing, Credentialing and Privileging - Many of these rules were written in the 20th century in a time before broadband and the technology of today. The National Broadband Plan aims to re-write these rules to speed the adoption of E-Care.

Protect Privacy - This is probably the biggest issue in Health Care IT. We will not see widespread adoption of electronic health records, E-Care, or health care IT until providers and patients are comfortable with the standards of privacy. In a time when we expose more and more information about ourselves online, most people are still not willing to share their medical information with anyone other than their provider and their insurance company.

These are the five major points of the National Broadband Plan as it relates to Health Care IT. In other posts I will examine the plan as it relates to other areas of national interest.

Good Talk,
Tom

Monday, November 15, 2010

IBM's Decision to Adopt Firefox as its Default Internal Browser



If you read my recent post about corporate IT policy (and my employer's refusal to allow Chrome) you'll know that I'm not a big fan of Microsoft Internet Explorer. I find it to be slow and somewhat prone to security holes. So I found it interesting that back in July IBM made the decision to use Mozilla's Firefox as the default web browser for employee computers. In the announcement, Bob Sutor, IBM's vice president of Linux and open source software, called out five key reasons for the move:

                        
    1. Firefox is stunningly standards compliant, and interoperability via open standards is key to IBM’s strategy.
    2. Firefox is open source and its development schedule is managed by a development community not beholden to one commercial entity.
    3. Firefox is secure and an international community of experts continues to develop and maintain it.
    4. Firefox is extensible and can be customized for particular applications and organizations, like IBM.
    5. Firefox is innovative and has forced the hand of browsers that came before and after it to add and improve speed and function.
The move seems to make sense for IBM, but what does it mean to the larger business community? In other words, why is this important?

First, it means approximately 400,000 employees of IBM will now automatically have Firefox installed on their machines. This is a fairly significant number of users and demonstrates that Firefox can be an appropriate enterprise-wide solution for web browsing (and that there are alternatives to IE).

Secondly, it is a big win for open source standards. Mozilla has kept Firefox steadfastly open-sourced with excellent documentation and wonderful security. As more and more applications move to the cloud we are all going to come to rely on open source standards both for security as well as interoperability and ease of use. IBM's adoption of Firefox is a big step in that direction.

Ultimately this move should benefit IBM employees with faster browsing and a better online experience. But, I think the move benefits all of us as this "endorsement" should spur interest and faith in open source solutions for other enterprise tools.

Good Talk,
Tom

Source: http://www.sutor.com/c/2010/07/ibm-moving-to-firefox-as-default-browser/
Image: http://www.livbit.com/article/2010/07/01/ibm-decides-firefox-as-its-default-web-browser/

New IT policy for the United States Army


The Army is rarely thought of as an organization relying on IT. We're more apt to associate the Army with mortars and guns and basic training, but the truth is that today's Army is a huge purchaser and user of Information Technology. Whether it's campaign coordination by the generals or real-time data updates to the soldiers on the ground, the increasing use of IT has become a definite advantage in the Army's war fighting ability.

Recently Lt. Gen. Jeff Sorenson, the Army's Chief Information Officer, put out a memo announcing the Army's new policy on procurement and implementation of software solutions. The focus is on ensuring compatibility of systems across all Army units. The memo spells out a Common Operating Environment (COE) for the entire US Army. The memo states "Implementation of COE will decrease the time it takes to deliver relevant applications to the war fighters who need them, and decrease the cost of doing so."

The COE policy is part of a larger Army policy called Army Software Transformation (AST) that aims to move the Army's software procurement, implementation and management policies to the cutting edge and make our war fighters even more effective. Other aspects of AST include improved email (a move to Exchange Server), Active Directory, Enterprise Network Operations, Data Center Consolidation, and a move toward Agile development.

From General George Casey, US Army Chief of Staff: "We're building an Army that is a versatile mix of tailorable and networked organizations operating on a rotational basis...to provide a sustained flow of trained and ready forces for full spectrum operations...and to hedge against unexpected contingencies...at a tempo that is predictable and sustainable for our all-volunteer forces".

The new IT policies are helping to improve today's Army and make our soldiers more effective in fighting the new kinds of conflicts we're seeing in Afghanistan, Iraq and around the world. I'm confident our soldiers can rise to the occasion and create a world class IT organization.

Good Talk,
Tom

Sources:
http://www.federalnewsradio.com/index.php?nid=35&sid=2116410
http://ciog6.army.mil/ArmyEnterpriseNetworkVision/tabid/79/Default.aspx
http://ciog6.army.mil/LinkClick.aspx?fileticket=j4DkCajsfGQ%3d&tabid=79

Sunday, November 14, 2010

Are Mobile Devices Creating a Problem for IT and HR?

Mobile smartphones are clearly one of the fastest growing technologies of the last decade. The ubiquity of BlackBerries, iPhones, Droids, Evos and others has, in many ways, made life easier for everyone. We are now in near constant contact with clients, colleagues, friends, and family. From a productivity point of view this is a very good thing (we'll leave the psychological and emotional discussion for another time). We can be much more responsive to the needs of those around us and we can be much better informed than at any time in the past. However, there is one issue that is beginning to creep up in companies around the world.

Most IT departments, especially at large companies, have a fairly well defined policy about the use of the company's network resources and what types of behavior are permissible and what are prohibited. These policies vary from company to company and are enforced to varying degrees of effectiveness. I've worked with companies that allow Facebook and YouTube at work, recognizing the need of employees to take a mental break every now and then. I've worked at a company that allows only a predefined list of sites necessary to conduct business (even sites like NYTIMES.com and Yahoo.com were blocked). And then I've worked at companies that fall somewhere in the middle (i.e. no business use for YouTube, but you might want to buy a gift for a boss/coworker on Yahoo Shopping). In all of these cases, my conversations with IT managers have made clear the policies are well thought out and consistent with the culture and values of the company. All offensive content is blocked at 99% of the companies I've worked with (one had no web filter at all and a very open culture).

But how do you deal with the little computers in our pockets that run on a cell phone signal? When employees can bring their own network to work, the risk to companies is much higher. Clearly, IT can manage the risk of viruses/malware to the network (after all, my Droid does not interact with my client's network very often). This risk is fairly straightforward and familiar to IT managers. But what about the risk of displaying offensive content? IT can't filter the browser on a privately owned smartphone. I'm sure we all know colleagues who have called up the latest YouTube sensation at lunch or on a break. I'm sure a couple of people even know colleagues who have shown pornography at work (thankfully, I've never been in that situation). From an HR point of view, the risk of a sexual harassment claim or a hostile work environment claim (think offensive jokes, videos, etc.) is increased by the growing presence of our smartphones. Additionally, most of these smartphones have cameras. There is a real risk of employees photographing confidential information.

So what is the answer? Honestly, there is no easy answer. I know a couple of employers that ban cell phones inside all their buildings (mostly employers involved with classified government work). This seems like an extreme measure and for some industries (sales and consulting come to mind) would cripple your workforce. If there is a simple technology solution, I'm not aware of it. I think the solution will be a mix of carefully enforced policy and a culture of respect in the workplace. I, for one, will be interested to see how it plays out in the future.

Good Talk,
Tom

Common Carriers and Net Neutrality (Part 9 in a multi-part series)

Until 2005 the United States imposed rules on common carriers (legacy telecom companies such as AT&T and Verizon) requiring them to sell bandwidth on their networks to other ISPs at discount prices. The idea was basically that the best way to ensure competition in the ISP space was to make incumbents sell their bandwidth to other ISPs at wholesale prices. This would allow the Earthlinks of the world to compete and, hopefully, create an environment where increased competition leads to lower prices and better service for the consumer. However, in 2005, the telecom companies successfully lobbied the FCC to change the rules.

The carriers argued that the rules unfairly penalized them simply because of the delivery mechanism they used. AT&T and Verizon delivered data services over old telephone wires. Because, as telephone companies, they were designated common carriers, the rules were different. Comcast, on the other hand, delivered data services over coaxial cable and was not subject to the FCC's rule about wholesaling bandwidth. In 2005, the FCC agreed with the legacy carriers and dropped the rule requiring discounted prices. Some say this decision by the FCC provided a rallying point for the net neutrality movement.

Other countries such as South Korea and Japan have largely maintained similar bandwidth sharing rules to the pre-2005 rules in the United States. One can question whether this is the reason some of these countries have much wider broadband adoption, much faster broadband speeds, and much lower prices. Should the FCC have maintained the pre-2005 rules? Did the rule change spark the call for net neutrality? Should Comcast be regulated as a common carrier? All questions worth exploring.

Good Talk,
Tom

Corporate Policy - Security vs Usability

I'm taking a break from my series on net neutrality to discuss IT policy at the company level (as opposed to the national/international level). Recently my firm was acquired by a much larger company, and over the last week or so we, the acquired employees, have been working through the elements of the transition. While both companies have been excellent in communicating changes and keeping a positive attitude throughout, there have definitely been some "hiccups".

As consultants, our primary concern is always client service. The merger, to say the least, has been a distraction from serving our clients. We push through, but there is a noticeable impact on our clients as our time and effort are divided between their work and our own transition. This impact has been exacerbated by a slew of technology changes we must handle. We've gone from Microsoft Outlook, a simple, easy-to-use and functional email program, to Lotus Notes, a program that hasn't had a major overhaul in 20 years. Our corporate IT policy does not allow Outlook and has stifled our creative efforts to use it anyway (DAMO and other tools). Additionally, corporate policy mandates the use of Microsoft Internet Explorer and actively scans for installs of other browsers to delete them. Anyone who has used Chrome knows how much faster, easier and more intuitive it is. Lastly, we have an auto-backup program that backs up our machines on a daily basis. This is a great idea. However, the backup cannot be scheduled person by person; it is randomized to utilize network resources efficiently. This can be a really big hassle when a consultant is using his/her machine to deliver a PowerPoint presentation and suddenly the backup utility kicks on and zaps the CPU/memory of the machine.

Each of these policies was set up with noble intentions. Outlook is not as secure as Lotus Notes, and Notes has phenomenal back-end database capabilities. IE is the corporate standard for browsers and minimizes compatibility risks. Regular backups are an essential part of any corporate risk mitigation/disaster recovery plan. However, the consequences of these policies have, perhaps, not been fully explored. Moving 600 consultants used to Outlook, Chrome, and on-demand backups onto these new policies has severely impacted our ability to help our clients. We spend more time rebooting, more time fighting with our email and more time waiting for web pages to load. All this means we have less time (or less sleep) to help our clients. (This doesn't even include the consultants who were forced to give up their Macs for ThinkPads.)

When designing IT policy (or integrating a new company with conflicting IT policies), it's important to keep the ultimate goal in mind. When policy or process impedes progress, a company must seriously re-evaluate. Now, it's been less than two weeks and I'm sure the learning curve is steep, but this is a good example of the impact IT policy can have on a business and its customers.

Good Talk,
Tom

Friday, November 12, 2010

Real-Life Example of Net Neutrality Debate (Part 8 of a multi-part series)

I'm taking this opportunity to underscore the real-world implications of the net neutrality debate. So far I've only talked about the academic/theoretical implications, but the truth is, the debate and its outcome will have an impact on all of us. On October 16th, that impact was felt across the New York region.

If you were a Cablevision subscriber and happened to visit Hulu on 10/16 to watch something from Fox, you would have received the following message:

[Screenshot no longer available: Hulu's notice informing Cablevision subscribers that Fox content was blocked during the dispute]

Fox (and its owner, News Corp) and Cablevision are in a dispute over carriage fees for Fox's signal on Cablevision's network. These kinds of disputes are more and more common in the world of cable television, but this is the first time I can think of that one has impacted Internet users. Net neutrality regulation would have allowed Cablevision subscribers to access Fox's content on the web. The dispute between the two companies had been ongoing for a while, but the "negotiations" clearly escalated. With the MLB playoffs, the World Series and the Sunday NFL slate on Fox, the stakes were high. Fox and Cablevision ultimately reached a deal on October 30th, in time for three million New Yorkers to catch Game 3 of the World Series.

Good Talk,
Tom

Rebuttal to the Business Argument Against Net Neutrality (Part 7 in a multi-part series)

In my last post I described one argument that ISPs are using to combat net neutrality legislation: namely, that tiered pricing is necessary to incentivize ISPs to continue investing in better infrastructure. As I stated, at first blush the argument seems logical: by employing pricing leverage the ISPs can make more money and thus have more to invest in their networks. However, an astute student of economics will quickly see the logical fallacy of such an argument.

Increased pricing power leading to higher profits in no way guarantees capital expenditures or investment on the part of the ISP. In fact, there is economic theory to suggest that exactly the opposite is true. Let me explain why. Under a discriminatory regime with multiple pricing tiers, an ISP that needs or wants to increase profits simply raises the top (tier-one) rate. Those companies/customers paying for tier-one access clearly need priority treatment of their traffic and are thus willing to pay for it. This leads the ISP to higher profits. Notice, though, that the priority tier is valuable precisely because capacity is scarce: every upgrade that relieves congestion also erodes the premium customers will pay to jump the queue, so investment cannibalizes the ISP's own premium revenue. In a non-tiered system, the ISP does not have that revenue stream to protect, nor the luxury of raising prices for only the tier-one customers; if it raises prices for everyone, it risks alienating those who do not value priority. Thus, the lack of a premium pricing revenue stream may increase the incentive to invest in greater infrastructure to serve greater, flat-rate volume. "ISP's incentive to invest on capacity under a discriminatory network can be smaller than that under a neutral regime where such rent extraction effects do not exist"* This would actually seem to suggest that a neutral net policy would support greater investment in future infrastructure enhancements.
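
To make that intuition concrete, here is a toy numerical sketch in Python. This is not the Choi-Kim model itself, and every number in it is invented purely for illustration; the only assumption carried over from the argument above is that the priority premium shrinks as capacity grows:

    # Toy comparison of an ISP's incentive to add capacity under tiered
    # vs. flat-rate pricing. All numbers are hypothetical.

    def tiered_profit(capacity):
        # The priority tier is valuable precisely because capacity is
        # scarce: the congestion rent erodes as the network grows.
        premium = 100.0 / capacity   # rent from congestion-sensitive customers
        base = 2.0 * capacity        # flat revenue from everyone else
        return base + premium

    def flat_profit(capacity):
        # Under a neutral, flat-rate regime, revenue simply scales with
        # the volume the network can carry.
        return 3.0 * capacity

    def marginal_gain(profit_fn, capacity, delta=1.0):
        # Payoff from adding one more unit of capacity.
        return profit_fn(capacity + delta) - profit_fn(capacity)

    for c in (10, 20, 40):
        print(f"capacity {c}: tiered {marginal_gain(tiered_profit, c):+.2f}, "
              f"flat {marginal_gain(flat_profit, c):+.2f}")

In this made-up setup the marginal payoff from investment is always lower under tiered pricing, because part of every upgrade's value is destroyed congestion rent. That is the rent-extraction effect the quote above describes.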

Good Talk,
Tom

*Choi, J. P., & Kim, B.-C. (2008). Net Neutrality and Investment Incentives, CESifo Working Paper No. 2390.

The Business/Economic Argument Against Net Neutrality (Part 6 in a multi-part series)

Opponents of net neutrality (primarily Internet Service Providers, or ISPs) have argued that any legislation forcing them to treat all traffic equally and prohibiting traffic shaping, throttling, two-sided pricing, or blocking of content would adversely affect the ISPs' ability to invest in infrastructure improvements and new technologies, and would ultimately be detrimental to the end user's Internet experience. They argue that the ability to differentiate prices (tiered pricing) would incentivize ISPs to invest in improvements to their networks and, conversely, that the inability to extract different rates from different types of content providers/users would disincentivize them from investing. The fundamental economic argument is that the more pricing power a producer/provider has, the more investment that producer/provider is willing to make in future goods and services.

At first blush this makes a lot of sense. The more pricing power a company has, the more money it makes and the more capable it is of making future investments. Warren Buffett is a huge proponent of the concept of pricing power in choosing investments (See's Candies, American Express and Coca-Cola being but a few examples). A business with pricing power traditionally does very well in the marketplace (as a counter-example, look at newspapers in today's market). However, we should be careful not to confuse financial success with an economic incentive to increase business investment. In my next post I'll explain why tiered pricing (and pricing power) do not necessarily lead to more investment.

Good Talk,
Tom

Fifth Amendment Argument Against Net Neutrality (Part 5 in a multi-part series)

Opponents of net neutrality have also argued that such regulation would constitute an unlawful "taking" of property. The 5th Amendment to the United States Constitution allows the taking of private property by the government only with "just compensation" to the owner of the property in question. Opponents of net neutrality argue that the regulation, by forcing network owners to treat all content equally, deprives them of rightful ownership of these assets and therefore violates this clause. By taking away the ISPs' ability to refuse access to their networks (content blocking), or by mandating the conditions under which access is permissible, net neutrality (they argue) violates the 5th Amendment rights of the ISPs.

The case law around this argument is open to some interpretation. In Kaiser Aetna v. United States, the court held that the right to exclude others from one's property is an essential element of property ownership: "one of the most essential sticks in the bundle of rights that are commonly characterized as property [is] the right to exclude others."

However, other case law has been less clear when the matter concerns the FCC. As Professor Rob Frieden notes:

“While reviewing courts have questioned the nature, type and rates of the FCC mandated common carrier interconnection and facilities-leasing requirements, the judiciary has not deemed the requirements confiscatory: ‘There is no evidence that the decision to adopt TELRIC [i.e., compulsory pricing of local exchange service elements on the basis of quite low Total Element Long Run Incremental Cost] was arbitrary, opportunistic, or undertaken with a confiscatory purpose. Indeed, the indications in the record are very much to the contrary.’" *

The Fifth Amendment argument, while compelling under some strict constructionist interpretations of the Constitution, will probably have a difficult time advancing in the courts. I think ISPs are going to need to rely on something else if they hope to derail net neutrality legislation.

Good Talk,
Tom

* From footnote 57 of "Internet 3.0: Identifying Problems and Solutions to the Network Neutrality Debate," Rob Frieden.


First Amendment Argument Against Net Neutrality (Part 4 in a multi-part series)

ISPs (and their lawyers) will also argue against net neutrality on two constitutional grounds (either of these arguments is stronger than the business argument). ISPs argue that net neutrality violates both their 1st and 5th Amendment rights. Interestingly, there is precedent within existing case law to support each of these arguments. In this post I will examine only the 1st Amendment argument.

An illustrative example of net neutrality legislation is the bill put forth by Senator Byron Dorgan entitled "The Internet Freedom Preservation Act." The act, introduced in the second session of the 109th Congress, reads, in part:

"Each broadband service provider shall (1) not block, interfere with, discriminate against, impair, or degrade the ability of any person to use a broadband service to access, use, send, post, receive, or offer any lawful content, application, or service made available via the Internet” (Internet Freedom Preservation Act, S.2917 12(a)(1)(2006), 2006).

ISPs argue that net neutrality, specifically the provision that requires them to treat all traffic equally, violates their rights under the 1st Amendment. The entire premise of net neutrality (both as a concept and as legislation) is that the Internet should be open and non-discriminatory. This noble goal, in practice, requires that ISPs (generally private companies) behave in a non-discriminatory manner. Take this concept a step further and we arrive at the logical conclusion that ISPs must treat the traffic of a competitor the same way they treat their own traffic. ISPs argue that compelling them to deliver a message is as much a violation of their 1st Amendment rights as preventing them from speaking would be. In the past the courts have agreed. In Pacific Gas and Electric Company v. Public Utilities Commission, the court held that "For corporations as for individuals, the choice to speak includes within it the choice of what not to say."

Many ISPs and conservative think tanks have latched on to this argument as to why net neutrality legislation should be defeated. In the next post we'll look at another constitutional argument against net neutrality.

Good Talk,
Tom

Non-Neutral Network (Part 3 of a Multi-Part Series)

So we've talked a little about the policy debate around net neutrality and a little about how the Internet works today, but what would a non-neutral network policy involve? Primarily three things: Quality of Service, Traffic Shaping, and Pricing Models.

Quality of Service: Historically, the Internet operated on peering agreements between ISPs built around "best effort" delivery. In other words, an ISP would agree to process the traffic from another network with its best effort, and, when all ISPs did this, the Internet operated fairly smoothly. In a non-neutral network world, these best efforts would be replaced by service level agreements and contracts. And, perhaps more importantly, ISPs would discriminate against certain traffic. This would inevitably lead to a tiered Internet where an ISP's preferred customers/partners received priority treatment of their packets and non-preferred customers'/partners' traffic moved more slowly across the network.
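
As a deliberately simplified sketch of what that discrimination means mechanically, here is a hypothetical Python "router" that always forwards preferred partners' packets first; the tiers and packet names are made up for illustration:

    import heapq

    PREFERRED, STANDARD = 0, 1   # lower number = served first

    class TieredRouter:
        """Toy packet queue that forwards preferred traffic first."""

        def __init__(self):
            self._queue = []
            self._seq = 0   # tie-breaker keeps FIFO order within a tier

        def enqueue(self, tier, packet):
            heapq.heappush(self._queue, (tier, self._seq, packet))
            self._seq += 1

        def forward_next(self):
            _tier, _, packet = heapq.heappop(self._queue)
            return packet

    router = TieredRouter()
    router.enqueue(STANDARD, "competitor's video chunk")
    router.enqueue(PREFERRED, "partner's video chunk")
    router.enqueue(STANDARD, "email")

    for _ in range(3):
        print(router.forward_next())
    # -> partner's video chunk, competitor's video chunk, email

Even in this tiny example the competitor's packet always waits behind the partner's, regardless of arrival order; scale that up and you have the tiered Internet described above.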

Traffic Shaping: Traffic shaping goes hand-in-hand with Quality of Service. It is basically defined as any action to delay traffic moving across a network. It's really a tool to manage bandwidth. As traffic enters a network, the network can stall it (add latency) to free up space on the network and improve the overall flow or stream of data. Of course this greatly impacts the owner of the stalled traffic.
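
One classic mechanism for this kind of delaying is a token bucket: traffic may pass only when enough tokens have accumulated, so bursts are smoothed down to a configured average rate. A minimal sketch, with arbitrary rate and burst numbers:

    import time

    class TokenBucket:
        """Toy traffic shaper: delays sends to enforce an average rate."""

        def __init__(self, rate_bytes_per_sec, burst_bytes):
            self.rate = rate_bytes_per_sec
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def _refill(self):
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now

        def send(self, nbytes):
            # Block (i.e., add latency) until the packet may pass.
            self._refill()
            while self.tokens < nbytes:
                time.sleep((nbytes - self.tokens) / self.rate)
                self._refill()
            self.tokens -= nbytes

    shaper = TokenBucket(rate_bytes_per_sec=1500, burst_bytes=3000)
    for _ in range(4):
        shaper.send(1500)   # after the initial burst, roughly one packet per second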

Pricing Models: A non-neutral network could allow for a plethora of pricing models. Currently, ISPs generally charge the consumer a per-month, unlimited-use, bandwidth-constrained rate (i.e. I pay X dollars a month and I get a 10 Mbps connection; I can use it as often or as rarely as suits me, I can use it only to send emails, or I can watch Netflix, and the price remains the same). For the future, ISPs have talked about two-sided pricing, where both the consumer (me) and the content provider (e.g. Netflix) are charged when I access Netflix's content. They've also discussed tiered pricing based on service levels for both sides. A non-neutral network allows for more creative pricing and might help ISPs better manage their cost/revenue models.
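
For a back-of-the-envelope feel for the difference, here is a hypothetical revenue comparison; every price and usage figure is invented purely for illustration:

    def flat_rate_revenue(months):
        # Today's model: the consumer pays $50/month, full stop.
        return 50.0 * months

    def two_sided_revenue(months, gb_streamed):
        # Hypothetical two-sided model: a lower sticker price for the
        # consumer plus a per-gigabyte fee charged to the content provider.
        consumer_side = 40.0 * months
        provider_side = 0.25 * gb_streamed
        return consumer_side + provider_side

    months, gb = 12, 600   # one year of service, ~50 GB of video per month
    print(f"flat rate: ${flat_rate_revenue(months):,.2f}")       # $600.00
    print(f"two-sided: ${two_sided_revenue(months, gb):,.2f}")   # $630.00

In this made-up scenario the consumer's bill drops while the ISP's total take rises, which hints at why content providers, not consumers, tend to be the loudest critics of two-sided pricing.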

In my next post we'll examine an argument against net neutrality.

Good Talk,
Tom

Net Neutrality Specifics (Part 2 in a Mutli-Part Series)

In my last post I talked at a very high level about the policy debate currently raging around net neutrality. It might help the reader to have a better understanding of what exactly is being debated, and to get there, we need a better understanding of how the Internet works.

To most users of the Internet, the net is one amazing end-to-end system that allows us to access Google one minute, a florist in NYC the next, and a tailor in London the next. In truth, however, the Internet is actually composed of a number of smaller networks that work together almost seamlessly to get traffic from our computers to servers around the globe. You're probably fairly familiar with your Internet Service Provider (ISP); mine is Comcast (though on my mobile phone Verizon connects me to the Internet). When I go online, what I'm doing in essence is sending a request (a packet of information) from my computer's web browser onto Comcast's network, where it is routed through their servers and then passed off to other ISPs' servers until it ultimately reaches the servers at Google (or wherever I'm trying to surf to). The Google servers then send packets of information back across their ISP (and very likely other ISPs and backbone networks) to Comcast and ultimately back to my computer, where my web browser turns the packets into the Google homepage. (It's all rather more complicated than that, but that's the basic idea.)
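
A drastically simplified sketch of that round trip, with each network reduced to a named hop that forwards the packet (real routing involves DNS, BGP, TCP handshakes and much more; the hop names here are illustrative):

    request_path = ["my computer", "Comcast's network", "a backbone provider",
                    "Google's ISP", "Google's servers"]

    def route(packet, path):
        # Each network hands the packet to the next until it arrives.
        for hop in path:
            print(f"{hop}: forwarding {packet!r}")
        return f"response to {packet}"

    response = route("GET google.com homepage", request_path)

    # The response then traverses (roughly) the same hops in reverse.
    for hop in reversed(request_path[:-1]):
        print(f"{hop}: forwarding {response!r}")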

So, if all of this works fine right now, why the debate about net neutrality? Well, the simple fact is that each day we're accessing more and more content online. We watch more movies, download more music, play more online games, make more VoIP calls. All of this puts a strain on the networks that make up the Internet and, eventually, it means the ISPs need to install more capacity. Thinking back to the plumbing metaphor, imagine if you continually increased the amount of water flowing through your pipes. At first all is well, but eventually you'd need bigger pipes. Well, it's the same thing on the Internet. And the ISPs are the ones who own the pipes and, in general, the ones responsible for maintaining and upgrading those pipes. What they are arguing is that they should be able to charge customers different rates based on the type and/or amount of traffic those customers 1) generate and 2) consume. Obviously, the content owners like the current system, where the pipes are "dumb" and treat all traffic the same.

We'll explore ways the ISPs can profit and manage traffic in the next post.

Good Talk,
Tom

Introduction to Net Neutrality (Part 1 of a multipart series)

Net Neutrality is one of the more hotly debated topics in federal policy (probably behind health care, taxes, and cap and trade) of the last 5 years. The basic idea of net neutrality (also called network neutrality or Internet neutrality) is that the Internet should remain a free, unbiased medium that acts to convey messages and content from one party to another, and that it should not discriminate based on the type or source of the content (much like your plumbing does not discriminate based on the source of the water).

Proponents of net neutrality argue that the Internet was born and grew up in an open culture where participants cooperated freely to the ultimate benefit of all. They argue that a tiered Internet or a discriminatory pricing regime would hamper growth and lead to profits for a few at the cost of many.

Opponents of net neutrality (most often Internet Service Providers - ISPs) argue that the only way to ensure future growth is to allow the owners of the network infrastructure (the "pipes") to properly manage their networks. This includes traffic shaping, tiered pricing, and quality of service agreements.

Overall, this is a complicated topic and one that I suspect will be debated for years to come. Over the next few posts I will explore the topic in more depth and hopefully allow my readers to come to their own conclusions.

Good Talk,
Tom