May Contain Blueberries

the sometimes journal of Jeremy Beker


The recent redesign of this blog ended up being a lot more than just a visual refresh. In many ways it has come full circle. In the earliest incarnation of May Contain Blueberries, I hand-built HTML pages for each entry. As that became more annoying, I switched over to Movable Type in all its Perl glory. But as I was doing my own development in PHP, I eventually transitioned to WordPress, and while that served me (and the other users on my server) well, it was time to move on.

The biggest reason was maintenance, driven by security. For a blog that I don’t write on very often (sadly, once a year seems about the norm), I was having to apply security patches at least monthly, if not more often. The benefit of a fully dynamic blog platform was also its downfall. WordPress is such a large piece of software that there seems to be no end to the security issues being discovered, and as a solo system administrator I need to minimize my attack surface as much as I can.

I took two paths. There were quite a few blogs on my server that are no longer being used, so I took static “snapshots” of them. While this did break inbound links, they weren’t highly trafficked so I was OK with that (and search engines will figure it out).

For my blog, I decided to move to a static site generator. This has the benefits I wanted, such as templates and auto-generated index pages, but ends up serving plain HTML, which eliminates that security exposure. I don’t need on-site interactivity, so why pay the cost for it? I chose Jekyll. It allows me to write posts in Markdown and then build the site and push it to my server in one step.
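For the curious, the build-and-push step really is a one-liner. This is just a sketch of the sort of command I mean; the destination host and path are placeholders, and you may prefer to wrap it in a Makefile or small script:

jekyll build && rsync -az --delete _site/ you@yourserver:/usr/local/www/blog/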

On to the new thing! Who knows, I might write more (probably not).


There is much discussion on the internet about the wisdom of running one’s own mail server, and much of the criticism is valid. There are significant security concerns beyond the normal amount of maintenance of any system. For reasons varied and irrelevant here, I have chosen to do so for over 15 years.

The aspect that is often not discussed when comparing this to commercial services such as Gmail is that one has to deal with spam entirely on one’s own. This is difficult for (at least) two reasons. The obvious one is that it is a hard problem to begin with; the not-so-obvious one is that I have only two users on my mail server, so I can’t rely on a large user base to help identify spam.

For many years I have relied on tools such as Spamassassin to try to identify spam once it has reached my mail server. I also make use of various blacklists to identify IP addresses that are known to deliver spam. I use Mailspike and Spamhaus.

This is the situation I was in up until this past weekend. Hundreds of emails a day would slip past the blacklists. Spamassassin is very good but it was still allowing around 5% of those messages to reach my inbox. And the problem seemed to be getting worse.

In the past, I had used greylisting, but I eventually stopped because of its main side effect: messages from new senders, legitimate or otherwise, get delayed by at least 5 minutes. That is fine for most email, but for things like password resets or confirmation codes it was just too much of an inconvenience.

What I wanted was a system where messages that are unlikely to be spam make it right through and all others get greylisted.

My boss mentioned a solution he had implemented that greylisted based on the spam score of inbound email. This allowed him to only greylist things that looked like they might be spam. Unfortunately, looking at the emails that were slipping through my existing system, I found they generally had very low scores (spammers test against Spamassassin).

So, I pursued a different solution. Several services provide not only blacklists but also whitelists that give a reputation score for various IP addresses around the internet. I chose to use the whitelists from Mailspike and DNSWL.

I implemented a hierarchy:

  • Accept messages from hosts we have manually whitelisted
  • Reject messages from hosts on one of the watched blacklists
  • Accept messages from hosts with a high reputation score
  • Greylist everything else

When I enabled this ruleset, I thought I had broken things. I stopped getting any email coming into my system. It turns out that I had just stopped all the spam. It was amazing.

In the two days I have been running this system, every legitimate email has made it to my inbox. I have seen 10-15 messages get through the initial screens and then be correctly identified as spam by Spamassassin. (In the early stages I had a few messages make it to my inbox, but I realized that was because I trusted the whitelists more than the blacklists; that is, hosts were listed as both trustworthy and sending spam. As the blacklists seem to react faster, I decided to switch the order, as shown above.)

You can look at the graphs to see when I turned the system on:

This first graph shows the number of messages that were accepted by my server (per second). You can see that the number dropped considerably when I turned on my hybrid solution. Since messages are now rejected or greylisted before being accepted, there are fewer messages for Spamassassin to investigate.

This can be seen here, where the number of messages identified as spam also went down because they were stopped before Spamassassin even needed to look at them.

If you run postfix and would like to implement a similar system, here is the relevant configuration section from my main.cf.

smtpd_recipient_restrictions = permit_mynetworks,
    permit_sasl_authenticated,
    reject_invalid_hostname,
    reject_non_fqdn_sender,
    reject_non_fqdn_recipient,
    reject_unknown_sender_domain,
    reject_unknown_recipient_domain,
    reject_unauth_pipelining,
    reject_unauth_destination,
    reject_non_fqdn_sender,
    check_client_access hash:/usr/local/etc/postfix/rbl_override,
    reject_rbl_client bl.mailspike.net,
    reject_rbl_client zen.spamhaus.org,
    permit_dnswl_client rep.mailspike.net=127.0.0.[18..20],
    permit_dnswl_client list.dnswl.org=127.0.[0..255].[2..3],
    check_policy_service unix:/var/run/postgrey.sock,
    reject_unverified_recipient,
    permit
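One note on the greylisting piece: the check_policy_service line above assumes postgrey is already listening on that unix socket. If you need to set that up, the flags I mean look roughly like this (from my memory of the postgrey man page, so double-check them and use whatever service wrapper your platform provides):

postgrey --unix=/var/run/postgrey.sock --delay=300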

So, overall this has been a resounding success. I hope this helps some of you out there with the same challenges.


I like to read. It is my chosen form of escapism. After participating in the Goodreads 2015 Reading Challenge I thought it might be fun to gather some further statistics about how my reading changed throughout the year. My goal had originally been 25 books, which I way exceeded with a count of 36. So, for 2016, I set it to 35 books. Some observations:

  • I started a new job at the beginning of March, so that kept me busy
  • May was high in page count due to rereading (I can reread faster than I can read a fresh book)
  • Tiffany and I went on vacation in September, so lots of books there

[Image: 2015-Books, my 2015 reading statistics chart]

And here is a nice collage of the titles taken from Goodreads.

[Image: collage of book covers]

What is your reading goal for 2016?


Welcome to another of my end-of-the-year, oh-god-I-didn’t-blog-all-year posts. I’ve been trying to go through some of the geeky things I did this year that were a challenge for me and document them so that the process might be easier for someone else.

Today’s topic: setting up a VPN server using a Cisco router for “road warrior” clients (aka, devices which could be coming from any IP address).

As should come as no surprise to anyone who knows me or is exposed to my twitter stream, I value privacy and security, both from a philosophical perspective and as fun projects to tackle.

This project arose as an evolution of earlier VPN setups I have had in the past. When I was living in the Linux world (and before I purchased my Cisco router), I used a Linux server as my internet router. If you are in that situation, I highly recommend the strongSwan VPN server. It is an enterprise-grade VPN server that is also easily configured for small setups. I often had multiple VPN tunnels up, both site-to-site for fixed connections and for road warriors, using both pre-shared keys (PSK) and X.509 certificates.

But when I upgraded our home network to using a Cisco 2811 router that I bought from a tech company liquidation auction for $11.57, running the strongSwan VPN from behind the NAT router became much more challenging. (Doable, but required some ugly source routing hacks I never liked.)

My requirements were:

  1. IPsec
  2. Capable of supporting iOS and Mac OS X clients
  3. Clients could be behind NATs (NAT-T support)
  4. Pre-shared Key support (I might do certificates again later, but as there are only 2 users of the VPN, seems like overkill.)
  5. All traffic from the clients will be routed through the VPN (no split-tunnels)
  6. Ability to do hairpin routing. (This means that a VPN client can tunnel all of its traffic, including traffic destined for the rest of the internet, to the VPN server, which can route it back out to the internet. This is critical for protecting your clients on untrusted networks.)

The biggest challenge I ran into was not a lack of capability in the Cisco platform, but the fact that it is designed for much, much larger deployments than mine. In addition, most of the examples I found were for site-to-site configurations.

I don’t intend to go through all of the steps needed to set up a Cisco router; that is beyond the scope of this post. So I will be making the following assumptions.

  1. You are familiar working in the IOS command line interface
  2. You already have a working network
  3. It has a single external IP address (preferably a static IP)
  4. You have 1 (or more than 1) internal networks
  5. Internal hosts are NAT translated when communicating with the Internet
  6. You are familiar with setting up your ip access-list commands to protect yourself and allow the appropriate traffic in and out of your networks

OK, let’s go!

Note: For my setup, FastEthernet0/0 is my external interface (set up as ip nat outside)

User & IP address setup

Set up a user (or more than one) that will be used to access the VPN.

aaa new-model
aaa authentication login AUTH local
aaa authorization network NET local
username vpn-user password 0 VERY-STRONG-PASSWORD

And set up a pool of IP addresses that will be given out to users who connect to the VPN.

ip local pool VPN-POOL 10.42.40.0 10.42.40.255

ISAKMP Key Management

ISAKMP is the protocol used to do the initial negotiation and set up keys for the VPN session. First we will set up the more general settings, such as the use of 256-bit AES, PSKs, keepalives, etc.

crypto isakmp policy 1
 encr aes 256
 authentication pre-share
 group 2
 lifetime 3600

crypto isakmp keepalive 10

We will then set up the group which represents our clients. This includes setting parameters for your clients, such as the pool of IP addresses they will get (from above), DNS servers, settings for perfect forward secrecy (PFS), etc.

crypto isakmp client configuration group YOUR-VPN-GROUP
 key VERY-STRONG-GROUP-KEY
 dns YOUR-DNS-SERVER-IP
 domain YOUR-DOMAIN
 pool VPN-POOL
 save-password
 pfs
 netmask 255.255.255.0

Finally, we will pull these items into a profile, vpn-profile, that can be used to set up a client.

crypto isakmp profile vpn-profile
   match identity group YOUR-VPN-GROUP
   client authentication list AUTH
   isakmp authorization list NET
   client configuration address respond
   client configuration group YOUR-VPN-GROUP
   virtual-template 1

IPSEC Parameters

We set up the parameters that define how IOS transforms (i.e., encrypts and HMACs) the traffic on this tunnel and give them the name vpn-transform-set.

crypto ipsec transform-set vpn-transform-set esp-aes esp-sha-hmac

Full IPSEC profile

Finally we link both the ISAKMP (vpn-profile) and IPSEC (vpn-transform-set) items together and give them a name ipsecprof that can be attached to a virtual interface (below).

crypto ipsec profile ipsecprof
 set transform-set vpn-transform-set
 set isakmp-profile vpn-profile

Virtual Template Interface

This caused me a bunch of confusion. Because we do not have a static site-to-site tunnel, we can’t define a tunnel interface for our VPN clients. What we do is set up a template interface that IOS will use to create the interfaces for our clients when they connect.

This needs to reference your external interface, which in my case is FastEthernet0/0.

interface Virtual-Template1 type tunnel
 ip unnumbered FastEthernet0/0
 ip nat inside
 ip virtual-reassembly
 tunnel source FastEthernet0/0
 tunnel mode ipsec ipv4
 tunnel protection ipsec profile ipsecprof

Other Notes

It is important that you have the appropriate access controls set up to restrict where in your network a VPN client can send packets. That is really beyond the scope of this post as it is very dependent on your configuration.

However, at a minimum, you need to allow the packets that arrive on your external interface for VPN clients to be handled. These packets will show up in a few forms.

You will need to add rules to handle these packets to your external (border) access lists.

ip access-list extended inBorder
 permit esp any host YOUR-EXTERNAL-IP
 permit udp any host YOUR-EXTERNAL-IP eq isakmp
 permit udp any host YOUR-EXTERNAL-IP eq non500-isakmp

Client Setup

Assuming all of this worked (and I transcribed things properly), you will be all set to configure a client. This should be a relatively easy configuration.

  • VPN Type: IKEv1, in iOS/Mac OS X this is listed as Cisco IPsec or IPsec
  • Server: Your public server IP or hostname
  • Group: YOUR-VPN-GROUP
  • Pre shared key: VERY-STRONG-GROUP-KEY
  • User: vpn-user
  • Password: VERY-STRONG-PASSWORD

Final Notes

Even though this setup uses users that are hard-coded on your router, you may still want to set up a RADIUS server to receive accounting information so you can track connections to your VPN. It can also be expanded to do authentication and authorization for your VPN users.
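I have not included my accounting configuration here, but a rough sketch of what that might look like follows; the server address, shared secret, and list name are placeholders, and you should verify the exact syntax against the documentation for your IOS release rather than my memory:

radius-server host 192.0.2.10 auth-port 1812 acct-port 1813 key RADIUS-SHARED-SECRET
aaa accounting network VPN-ACCT start-stop group radius

crypto isakmp profile vpn-profile
 accounting VPN-ACCT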

I hope this was helpful to you. If you have any questions, please feel free to contact me via twitter @gothmog


As part of my transition from using a combination of Linux and FreeBSD for our home servers to being exclusively FreeBSD, I wanted to update how I did backups from my public server, bree, to the internal storage server, rivendell. Previously, I had done this with a home-grown script which used rsync to transfer updates to the storage server overnight. This solution worked just fine, but was not the most efficient (see: rsync.net: ZFS Replication to the cloud is finally here-and it’s fast). While I didn’t intend to replicate to rsync.net, I wanted to leverage ZFS since I am now going FreeBSD to FreeBSD.

There are numerous articles about using zxfer to perform backups but there was one big hiccup that I couldn’t get over. Quoting the man page:

zxfer -dFkPv -o copies=2,compression=lzjb -T root@192.168.123.1 -R storage backup01/pools

Having to open up the root account on my storage server, no matter how much I restricted it by IP address, keys, or whatever, makes me really uncomfortable and was a show-stopper for me. But I thought I could do better. I have some experience using restricted shells to limit access to servers, and I knew that ZFS allows delegating permissions to non-root users, so I decided to give it a shot.

TL;DR: It can work.

The configuration had a few phases to it:

  1. Create a new restricted user account on my backup server and configure the commands that zxfer needs access to in the restricted shell
  2. Create the destination zfs filesystem to receive the mirror and configure the delegated permissions for the backup user
  3. Set up access to the backup server from the source server via SSH
  4. Make a slight modification to zxfer so that it runs the zfs command from the PATH instead of hardcoding the path in the script

Setting up the restricted user

I created a new user on the backup system named zbackup that would be my restricted user for receiving the backups. The goal was for this user to be as limited as possible. It should only be allowed to run the commands necessary for zxfer to do its job. I landed on using rzsh as the restricted shell as it was the first one I got working with the correct environment. I set up a directory to hold binaries that the zbackup user was allowed to use.

root@storage$ mkdir /usr/local/restricted_bin
root@storage$ ln -s /sbin/zfs /usr/local/restricted_bin/zfs
root@storage$ ln -s /usr/bin/uname /usr/local/restricted_bin/uname

I then set up the .zshenv file for the zbackup user to restrict the user to that directory for executables.

export PATH=/usr/local/restricted_bin
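For reference, creating the zbackup user itself was nothing special; roughly something like this (if your zsh install does not already provide an rzsh link, zsh runs restricted when invoked under a name beginning with “r”, so a symlink does the job):

root@storage$ ln -s /usr/local/bin/zsh /usr/local/bin/rzsh
root@storage$ pw useradd zbackup -m -s /usr/local/bin/rzsh -c "restricted zxfer backup user"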

Setting up the destination zfs filesystem

I already had a zfs filesystem devoted to backups, so I made a new zfs filesystem underneath it to hold these new backups and serve as the point where I could delegate permissions. Then, through trial and error, I figured out all the permissions I had to delegate to the zbackup user on that filesystem to allow zxfer to work.

root@storage$ zfs create nas/backup/bree-zxfer
root@storage$ chown zbackup:zbackup /nas/backup/bree-zxfer
root@storage$ zfs allow -u zbackup atime,canmount,casesensitivity,checksum,compression,copies,create,
                          dedup,destroy,exec,filesystem_count,filesystem_limit,jailed,logbias,mount,
                          normalization,quota,readonly,receive,recordsize,redundant_metadata,
                          refquota,refreservation,reservation,setuid,sharenfs,sharesmb,snapdir,
                          snapshot_count,snapshot_limit,sync,userprop,utf8only,volmode nas/backup/bree-zxfer

(I figured out the list of actions and properties that I needed to delegate by having zxfer dump the zfs create command it was trying to run on the backup system when it failed.)

Update: I forgot 1 thing that is critical to making this work. You need to ensure that non-root users are allowed to mount filesystems. This can be accomplished by adding the following line to your /etc/sysctl.conf and rebooting:

vfs.usermount=1
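If you would rather not reboot right away, the same setting can be applied immediately with sysctl (the sysctl.conf entry just makes it stick across reboots):

root@storage$ sysctl vfs.usermount=1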

Remote access to the backup server

Nothing fancy here. On my source server, I created a new SSH keypair for the root user (no problem with running the source zfs command as root). I then copied the public half of that key to the authorized_keys file of the zbackup user on the backup server. At this point, I could ssh from my source server to the backup server as the zbackup user, but once logged in to the backup server, the only commands that can be run are those in the /usr/local/restricted_bin directory (zfs and uname).
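If you have not done this dance before, it is only a couple of commands. A rough sketch (key type and paths are just examples, and I am glossing over creating ~zbackup/.ssh with the right ownership and permissions):

root@source$ ssh-keygen -t ed25519 -N "" -f /root/.ssh/id_ed25519
# append /root/.ssh/id_ed25519.pub to ~zbackup/.ssh/authorized_keys on the backup server, then test:
root@source$ ssh zbackup@storage zfs list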

Tweak zxfer script to remove hard coded path in zfs commands

One of the (intentional) limitations of a restricted shell is that the restricted user is not allowed to specify a full pathname for any command; only commands located in their PATH can be run. Unfortunately, while the zbackup user has the zfs command in their PATH, it is referenced as /sbin/zfs in the zxfer script. To work around this, I modified the zxfer script to drop the hardcoded path and assume that zfs will be in the PATH. This occurs in only two places in the script; a quick search for /sbin/zfs will find them.
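The edit itself is trivial; something like this one-liner does it, assuming zxfer lives at /usr/local/sbin/zxfer (adjust the path to wherever your copy is installed):

root@source$ sed -i '' 's|/sbin/zfs|zfs|g' /usr/local/sbin/zxfer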

Moment of truth!

After all this, I was now able to run any number of commands to mirror my source server’s zfs filesystems (with snapshots) to my backup server.

root@source$ zxfer -dFPv -T zbackup@storage -N zroot/git nas/backup/bree-zxfer
root@source$ zxfer -dFPv -T zbackup@storage -R zroot/var nas/backup/bree-zxfer

And best of all, the storage server does not have SSH enabled for root. Success.


Arghh. I just spent 30 minutes trying to set up a locked-down restricted shell on my FreeBSD box, and I want to help you not do the same. My challenge was properly setting the PATH variable so that the user could not break out and run arbitrary commands. The problem was ensuring that PATH was set for both interactive and non-interactive shells. The interactive ones were easy using either .zshrc or .bash_profile. But although the documentation for bash says it reads .bashrc for non-interactive shells, it did not.

But, finally I found that .zshenv worked so now I can use the restricted ZSH. Yay!
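For the record, the file that finally did the trick is tiny; a sketch, using the restricted binary directory from my backup setup as the example PATH:

# ~/.zshenv is read by zsh for every invocation, interactive or not
export PATH=/usr/local/restricted_bin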


I am a big fan of virtualization of operating systems. It allows for easy testing and obviously running multiple operating systems on one machine. At my company, we use VMWare ESX for infrastructure virtualization, but for my own use (professionally and personally) I really like Oracle’s VirtualBox. It is fast, reliable, and best of all, free.

As I work for a large, centrally managed company, we unsurprisingly use a standard (Windows) operating system across all of our hardware. To a right-thinking computer user, this is clearly not acceptable. While I wish I could just discard the standard company system image, I cannot do so. For my daily work, I am a Linux fan (Fedora is my distribution of choice). Virtualization allows me to merge those two worlds in a relatively harmonious way. My end goal is to run my company’s OS image inside a virtual machine on top of my preferred Linux installation. But getting there can be a challenge.

Installing an OS inside a VM is straightforward and not worthy of a blog post, but that does not help me here, because I need to use the company-provided imaging tool that not only sets up the OS but also installs all of the corporate software and settings. This is done using a pretty slick tool (name intentionally withheld) that handles everything once the computer is registered on the back end by our IT staff.

This works great if I am installing onto the bare metal. Otherwise, there are challenges. Below is a slightly dramatized version of my install process. I won’t recount every iteration I tried, but hopefully it is helpful to someone.

Once I got my new machine, I happily blew away the company OS install and got Linux working. (After making a backup, what kind of heathen do you think I am?) VirtualBox, check. Got bootable image of system imaging tool, check. Here we go.

Unknown computer

Well, I guess that makes sense. Our IT staff registered the physical machine; their backend would know nothing about a VM running on top of it. I pondered what they could use to identify the machine. Obvious choices included:

  • MAC Address
  • Hardware Serial Number
  • CPU Serial number (ick)

I decided to start with MAC address as that was the easiest to change in the VM. I wanted to make the VM use the same MAC address as the computer itself. In order to do that, however, I had to change the computer to use a different one temporarily, as having duplicate MAC addresses on the same physical network will cause problems. (I am using bridged networking.) So, I changed the MAC address of the computer using ifconfig to something new. (I just incremented the last byte by 1.) And then copied the original one into VirtualBox. This can be done under the advanced settings for the network adapter.
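Concretely, the dance looked something like this; the interface name and MAC addresses here are made up, and the VM MAC can equally well be set in the GUI as I mentioned (the VM name matches the VBoxManage examples further down):

# give the host a temporary MAC (the original with the last byte bumped)
ifconfig eth0 down
ifconfig eth0 hw ether 0A:11:22:33:44:56
ifconfig eth0 up

# hand the original MAC to the VM (VBoxManage wants it without colons)
VBoxManage modifyvm "M3065" --macaddress1 0A1122334455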

I rebooted into the imaging software again and, success, it started imaging the machine. I was quite pleased with myself. Sadly, my satisfaction was short-lived. The imaging utility put the OS on the virtual machine but then died once it had booted into Windows and wanted to start installing further software.

In reviewing the logs, I saw the same sort of error as I had gotten originally: the computer was not recognized by the back-end system. This seemed odd, as it got part of the way through the install. It appeared that at this later stage of the install the tool used a different set of information to identify the computer on which it was running.

A specific section of the log file caught my eye:

-------------------------------------------------------------------------
Make: Innotek GmbH    Model: VirtualBox    Mfg:
Serial Number: 0
-  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -

I could see where this might cause a problem, as these values are not representative of the actual hardware. They are returned to an operating system by examining the Desktop Management Interface (DMI) data of the PC. Thankfully, after some research I found that VirtualBox has a way to set the values it provides to a guest OS. In order to determine what values to use, I used the Linux dmidecode tool. This provided a list of the underlying values I would need:

# dmidecode 2.12
SMBIOS 2.7 present.
35 structures occupying 1856 bytes.
Table at 0x54E3F000.

Handle 0x0010, DMI type 0, 24 bytes
BIOS Information
        Vendor: Hewlett-Packard
        Version: L70 Ver. 01.10
        Release Date: 06/24/2014
        Address: 0xF0000
        Runtime Size: 64 kB
        ROM Size: 8192 kB
        [truncated]

Buried in the advanced section of the VirtualBox manual is a section entitled Configuring the BIOS DMI information which outlines the commands to set all of these values. I ended up setting more than I probably needed. (I had to wrap these commands for display; pull each back onto one line if you need to run them.)

VBoxManage setextradata "M3065"
      "VBoxInternal/Devices/pcbios/0/Config/DmiBIOSVendor"
      "Hewlett-Packard"
VBoxManage setextradata "M3065"
      "VBoxInternal/Devices/pcbios/0/Config/DmiBIOSVersion"
      "L70 Ver. 01.10"
VBoxManage setextradata "M3065"
      "VBoxInternal/Devices/pcbios/0/Config/DmiBIOSReleaseDate"
      "06/24/2014"
VBoxManage setextradata "M3065"
      "VBoxInternal/Devices/pcbios/0/Config/DmiSystemVendor"
      "Hewlett-Packard"
VBoxManage setextradata "M3065"
      "VBoxInternal/Devices/pcbios/0/Config/DmiSystemProduct"
      "HP ZBook 15"
VBoxManage setextradata "M3065"
      "VBoxInternal/Devices/pcbios/0/Config/DmiSystemVersion"
      "A3009DD10203"
VBoxManage setextradata "M3065"
      "VBoxInternal/Devices/pcbios/0/Config/DmiSystemSerial"
      "XXXXXXXXXXX"
VBoxManage setextradata "M3065"
      "VBoxInternal/Devices/pcbios/0/Config/DmiSystemSKU"
      "F9Y23UP#ABA"
VBoxManage setextradata "M3065"
      "VBoxInternal/Devices/pcbios/0/Config/DmiSystemFamily"
      "103C_5336AN G=N L=BUS B=HP S=ELI"
VBoxManage setextradata "M3065"
      "VBoxInternal/Devices/pcbios/0/Config/DmiSystemUuid"
      "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
VBoxManage setextradata "M3065"
      "VBoxInternal/Devices/pcbios/0/Config/DmiBoardVendor"
      "Hewlett-Packard"
VBoxManage setextradata "M3065"
      "VBoxInternal/Devices/pcbios/0/Config/DmiBoardProduct"
      "string:1909"
VBoxManage setextradata "M3065"
      "VBoxInternal/Devices/pcbios/0/Config/DmiBoardVersion"
      "KBC Version 94.51"
VBoxManage setextradata "M3065"
      "VBoxInternal/Devices/pcbios/0/Config/DmiBoardSerial"
      "XXXXXXXXXXXXX"

(I removed the actual serial number from the listing above.)

After this, I reran the imager for what turned out to be the final time and everything worked.

In the end it turned out to be a bit more work than I outlined above, but the critical steps were covered. I found it both a very frustrating and fun experience (once I got it working). A great puzzle to solve. It shows the power of virtualization software and how it is very unwise to trust what hardware tells you about itself as it is easy to manipulate.



After my former boss Susan Evan’s great blog post this morning, In the category of not as easy as it looks: Being Boss, I ran across a Harvard Business Review interview with the amazing John Cleese. It contained a great quote I had to share:

In the book Life and How to Survive It, which I developed with Robin Skynner, we decided that the ideal leader was the one who was trying to make himself dispensable. In other words, he was helping the people around him acquire as many of his skills as possible so he could let everyone else do the work and just keep an eye on things, minimizing his job and the chaos that would come with a transfer of authority.


Since the recent ruling in Verizon v. FCC where the US Court of Appeals for the DC Circuit overturned the FCC net neutrality rules (see the EFF Net Neutrality page for background), there has been considerable discussion about the potential harms (or benefits) of this ruling. I have listened and read and I feel that the mainstream media is missing the large but subtle danger that this ruling causes and why it is critical that the FCC move to reinstate these rules.

The argument that I keep hearing about why the net neutrality rules are needed is that if internet carriers are allowed to offer differentiated internet service for a fee, it will harm consumers by raising the prices they pay. For example, ESPN might pay Verizon to let Verizon’s customers stream ESPN video for free, but would then raise its own prices to cover the cost. While overturning the net neutrality rules would allow this, I don’t believe this is the real threat. Both ESPN and Verizon know that consumers prefer a lower-cost solution, so they will not go for that. And if Verizon and ESPN can make a deal that makes things cheaper for the consumer, it might even be a benefit for the consumer. And here be dragons.

I believe that deals such as the one I outlined could be a short-term benefit to consumers, but will change the way the economics of innovation work in a way that harms consumers in the long term by shifting the cost structure of innovation in favor of existing, large players.

The history of innovation on the Internet has been driven by the little guys. Google, the giant it is today, started as two guys in a dorm room. Facebook, another giant, started in a dorm room. In these and many other instances, the innovators had very limited resources. But, and this is the critical point, once they started providing a service on the internet, access to their new service was delivered at the same level as the big players’, and consumers could judge, say, Google vs. Altavista on the merits of the products and choose which was better.

My fear is that without net neutrality rules, the barrier to entry will be increased for new companies that can disrupt the marketplace and bring innovation to all consumers. I am not worried about the ESPNs or Verizons of the world. I am worried that it will make getting started harder for the next Google or Facebook.

So I strongly urge the FCC to reclassify internet service providers as common carriers and re-institute and strengthen the net neutrality rules to ensure that the Internet continues to innovate in a free and fair way.

More Background: