May Contain Blueberries

the sometimes journal of Jeremy Beker


I like to read. It is my chosen form of escapism. After participating in the Goodreads 2015 Reading Challenge, I thought it might be fun to gather some further statistics about how my reading changed throughout the year. My goal had originally been 25 books, which I far exceeded with a final count of 36. So, for 2016, I set it to 35 books. Some observations:

  • I started a new job at the beginning of March, so that kept me busy
  • May was high in page count due to rereading (I can reread faster than I can read a fresh book)
  • Tiffany and I went on vacation in September, so lots of books there

[Image: chart of my 2015 reading statistics]

And here is a nice collage of the titles taken from Goodreads.

[Image: collage of book covers]

What is your reading goal for 2016?


Welcome to another of my end-of-the-year, oh-god-I-didn’t-blog-all-year posts. I’ve been trying to go through some of the geeky things I did this year that were a challenge for me and document them so that the process might be easier for someone else.

Today’s topic: setting up a VPN server using a Cisco router for “road warrior” clients (aka, devices which could be coming from any IP address).

As should come as no surprise to anyone who knows me or is exposed to my Twitter stream, I value privacy and security, both from a philosophical perspective and as fun projects to tackle.

This project arose as an evolution of earlier VPN setups I have had in the past. When I was living in the Linux world (and before I purchased my Cisco router), I used a Linux server as my internet router. If you are in that situation, I highly recommend the strongSwan VPN server. It is an enterprise-grade VPN server that is also easily configured to handle small situations. I often had multiple VPN tunnels up at once, both site-to-site and for road warriors, using both pre-shared keys (PSK) and X.509 certificates.

But when I upgraded our home network to a Cisco 2811 router that I bought at a tech company liquidation auction for $11.57, running the strongSwan VPN from behind the NAT router became much more challenging. (Doable, but it required some ugly source routing hacks I never liked.)

My requirements were:

  1. IPsec
  2. Capable of supporting iOS and Mac OS X clients
  3. Clients could be behind NATs (NAT-T support)
  4. Pre-shared key support (I might do certificates again later, but as there are only two users of the VPN, it seems like overkill)
  5. All traffic from the clients will be routed through the VPN (no split-tunnels)
  6. Ability to do hairpin routing. (This means that a VPN client can tunnel all of its traffic, including traffic destined for the rest of the internet, to the VPN server, which routes it back out to the internet. This is critical for protecting your clients on untrusted networks; see the NAT note at the end of the Virtual Template Interface section below.)

The biggest challenge I ran into was not any lack of capability in the Cisco platform, but the fact that it is designed for much, much larger implementations than mine. In addition, most of the examples I found were for site-to-site configurations.

I don’t intend to go through all of the steps needed to set up a Cisco router; that is beyond the scope of this post. So I will be making the following assumptions:

  1. You are familiar with working in the IOS command line interface
  2. You already have a working network
  3. It has a single external IP address (preferably a static IP)
  4. You have one (or more) internal networks
  5. Internal hosts are NAT translated when communicating with the Internet
  6. You are familiar with setting up your ip access-list commands to protect yourself and allow the appropriate traffic in and out of your networks

OK, let’s go!

Note: For my setup, FastEthernet0/0 is my external interface (set up as ip nat outside)

User & IP address setup

Set up a user (or more than one) that will be used to access the VPN.

aaa new-model
aaa authentication login AUTH local
aaa authorization network NET local
username vpn-user password 0 VERY-STRONG-PASSWORD
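A quick aside: password 0 stores the password in plain text in your configuration. If your IOS image supports it, the secret form stores a hash instead and is worth preferring:

username vpn-user secret VERY-STRONG-PASSWORD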

And set up a pool of IP addresses that will be given out to users who connect to the VPN.

ip local pool VPN-POOL 10.42.40.0 10.42.40.255

ISAKMP Key Management

ISAKMP is the protocol used to do the initial negotiation and set up keys for the VPN session. First we will set up the more general settings, such as the fact that we will be using 256-bit AES, PSKs, keepalives, etc.

crypto isakmp policy 1
 encr aes 256
 authentication pre-share
 group 2
 lifetime 3600

crypto isakmp keepalive 10

We will then set up the group which represents our clients. This includes setting parameters for your clients, such as the pool of IP addresses they will get (from above), DNS servers, settings for perfect forward secrecy (PFS), etc.

crypto isakmp client configuration group YOUR-VPN-GROUP
 key VERY-STRONG-GROUP-KEY
 dns YOUR-DNS-SERVER-IP
 domain YOUR-DOMAIN
 pool VPN-POOL
 save-password
 pfs
 netmask 255.255.255.0

Finally, we will pull these items into a profile, vpn-profile, that can be used to set up a client.

crypto isakmp profile vpn-profile
   match identity group YOUR-VPN-GROUP
   client authentication list AUTH
   isakmp authorization list NET
   client configuration address respond
   client configuration group YOUR-VPN-GROUP
   virtual-template 1

IPsec Parameters

We set up the parameters that define how IOS transforms (aka, encrypts and HMACs) the traffic on this tunnel, and give the transform set the name vpn-transform-set.

crypto ipsec transform-set vpn-transform-set esp-aes esp-sha-hmac

Full IPsec profile

Finally, we link the ISAKMP (vpn-profile) and IPsec (vpn-transform-set) items together and give them the name ipsecprof, which can be attached to a virtual interface (below).

crypto ipsec profile ipsecprof
 set transform-set vpn-transform-set
 set isakmp-profile vpn-profile

Virtual Template Interface

This caused me a bunch of confusion. Because we do not have a static site-to-site tunnel, we can’t define a permanent tunnel interface for our VPN clients. Instead, we set up a template interface that IOS will use to create an interface for each client when it connects.

This needs to reference your external interface, which in my case is FastEthernet0/0.

interface Virtual-Template1 type tunnel
 ip unnumbered FastEthernet0/0
 ip nat inside
 ip virtual-reassembly
 tunnel source FastEthernet0/0
 tunnel mode ipsec ipv4
 tunnel protection ipsec profile ipsecprof
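One note on requirement 6 (hairpin routing): because the virtual template is marked ip nat inside, traffic that clients hairpin back out to the internet is translated by your existing NAT rules. Just make sure your NAT source list covers the VPN pool. A minimal sketch, assuming the pool from above (the access list name is illustrative; adapt it to your existing NAT configuration):

ip access-list standard NAT-SOURCES
 permit 10.42.40.0 0.0.0.255
ip nat inside source list NAT-SOURCES interface FastEthernet0/0 overload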

Other Notes

It is important that you have the appropriate access controls set up to restrict where in your network a VPN client can send packets. That is really beyond the scope of this post as it is very dependent on your configuration.

However, at a minimum, you need to allow the packets that arrive on your external interface for VPN clients to be handled. These packets will show up in a few forms: raw ESP (IP protocol 50), ISAKMP (UDP 500), and NAT-T encapsulated ESP (UDP 4500).

You will need to add rules to your external border access lists to handle these packets.

ip access-list extended inBorder
 permit esp any host YOUR-EXTERNAL-IP
 permit udp any host YOUR-EXTERNAL-IP eq isakmp
 permit udp any host YOUR-EXTERNAL-IP eq non500-isakmp
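If you don’t already have an inbound ACL applied at your border, it attaches to the external interface in the usual way (using my interface from above):

interface FastEthernet0/0
 ip access-group inBorder in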

Client Setup

Assuming all of this worked (and I transcribed things properly), you will be all set to configure a client. This should be a relatively easy configuration.

  • VPN Type: IKEv1; in iOS/Mac OS X this is listed as Cisco IPsec or IPsec
  • Server: Your public server IP or hostname
  • Group: YOUR-VPN-GROUP
  • Pre-shared key: VERY-STRONG-GROUP-KEY
  • User: vpn-user
  • Password: VERY-STRONG-PASSWORD

Final Notes

Even though this setup uses users that are hard-coded on your router, you may still want to set up a RADIUS server to receive accounting information so you can track connections to your VPN. It can also be expanded to do authentication and authorization for your VPN users.
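As a rough sketch of what the accounting piece might look like (the server address, key, and the ACCT list name are placeholders, not part of my running config):

aaa accounting network ACCT start-stop group radius
radius-server host 192.168.1.10 auth-port 1812 acct-port 1813 key RADIUS-KEY

crypto isakmp profile vpn-profile
 accounting ACCT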

I hope this was helpful to you. If you have any questions, please feel free to contact me on Twitter: @gothmog


As part of my transition from using a combination of Linux and FreeBSD for our home servers to being exclusively FreeBSD, I wanted to update how I did backups from my public server, bree, to the internal storage server, rivendell. Previously, I had done this with a home-grown script which used rsync to transfer updates to the storage server overnight. This solution worked just fine but was not the most efficient (see rsync.net: ZFS Replication to the cloud is finally here - and it’s fast). While I didn’t intend to replicate to rsync.net, I wanted to leverage ZFS since I am now going FreeBSD to FreeBSD.

There are numerous articles about using zxfer to perform backups, but there was one big hiccup that I couldn’t get over. Quoting the example from the man page:

zxfer -dFkPv -o copies=2,compression=lzjb -T root@192.168.123.1 -R storage backup01/pools

Having to open up the root account on my storage server, no matter how much I restricted it by IP address, keys, or whatever, makes me really uncomfortable and was a show-stopper for me. But I thought I could do better. I have limited experience using restricted shells to lock down access to servers, and I knew that ZFS allows for delegating permissions to non-root users, so I decided to give it a shot.

TL;DR: It can work.

The configuration had a few phases to it:

  1. Create a new restricted user account on my backup server and configure the commands that zxfer needs access to in the restricted shell
  2. Create the destination zfs filesystem to receive the mirror and configure the delegated permissions for the backup user
  3. Set up access to the backup server from the source server via SSH
  4. Make a slight modification to zxfer to allow it to run the zfs command from the PATH instead of hardcoding the path in the script

Setting up the restricted user

I created a new user on the backup system named zbackup that would be my restricted user for receiving the backups. The goal was for this user to be as limited as possible: it should only be allowed to run the commands necessary for zxfer to do its job. I landed on using rzsh as the restricted shell, as it was the first one I got working with the correct environment. I set up a directory to hold the binaries that the zbackup user is allowed to use.

root@storage$ mkdir /usr/local/restricted_bin
root@storage$ ln -s /sbin/zfs /usr/local/restricted_bin/zfs
root@storage$ ln -s /usr/bin/uname /usr/local/restricted_bin/uname

I then set up the .zshenv file for the zbackup user to restrict the user to that directory for executables.

export PATH=/usr/local/restricted_bin

Setting up the destination zfs filesystem

I already had a ZFS filesystem devoted to backups, so I made a new filesystem underneath it to hold these new backups and act as the point where I could delegate permissions. Then, through trial and error, I figured out all the permissions I had to delegate to the zbackup user on that filesystem for zxfer to work:

root@storage$ zfs create nas/backup/bree-zxfer
root@storage$ chown zbackup:zbackup /nas/backup/bree-zxfer
root@storage$ zfs allow -u zbackup atime,canmount,casesensitivity,checksum,compression,copies,create,\
dedup,destroy,exec,filesystem_count,filesystem_limit,jailed,logbias,mount,\
normalization,quota,readonly,receive,recordsize,redundant_metadata,\
refquota,refreservation,reservation,setuid,sharenfs,sharesmb,snapdir,\
snapshot_count,snapshot_limit,sync,userprop,utf8only,volmode nas/backup/bree-zxfer

(I figured out the list of actions and properties that I needed to delegate by having zxfer dump the zfs create command it was trying to run on the backup system when it failed.)

Update: I forgot one thing that is critical to making this work. You need to ensure that non-root users are allowed to mount filesystems. This can be accomplished by adding the following line to your /etc/sysctl.conf and rebooting (or applying it immediately with sysctl vfs.usermount=1):

vfs.usermount=1

Remote access to the backup server

Nothing fancy here. On my source server, I created a new SSH keypair for the root user (there is no problem with running the source zfs commands as root). I then copied the public half of that key into the authorized_keys file of the zbackup user on the backup server. At this point, I could ssh from my source server to the backup server as the zbackup user, but once logged in, the only commands that can be run are those in the /usr/local/restricted_bin directory (zfs and uname).
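For reference, the key setup looked roughly like this (the key filename and type are illustrative, and I am using the storage hostname from later in this post):

root@source$ ssh-keygen -t ed25519 -f /root/.ssh/id_backup -N ""
# copy /root/.ssh/id_backup.pub into ~zbackup/.ssh/authorized_keys on storage
root@source$ ssh -i /root/.ssh/id_backup zbackup@storage uname

That last command is a good smoke test: it should print FreeBSD, and any command not in /usr/local/restricted_bin should fail.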

Tweak the zxfer script to remove the hard-coded path in zfs commands

One of the (intentional) limitations of a restricted shell is that the restricted user is not allowed to specify a full pathname for any command; only commands located in their PATH can be run. Unfortunately, while the zbackup user has the zfs command in their PATH, it is referenced as /sbin/zfs in the zxfer script. To work around this, I modified the zxfer script to assume that zfs will be in the PATH. This was only in two places in the script; a quick search for /sbin/zfs will find them.
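If you want to script the change, something like this should do it (FreeBSD sed syntax; the zxfer path assumes an install from ports, and it is worth reviewing the result by hand):

root@source$ sed -i '' 's|/sbin/zfs|zfs|g' /usr/local/sbin/zxfer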

Moment of truth!

After all this, I was now able to run any number of commands to mirror my source server’s ZFS filesystems (with snapshots) to my backup server. (-R recurses into child filesystems, while -N transfers just the named one.)

root@source$ zxfer -dFPv -T zbackup@storage -N zroot/git nas/backup/bree-zxfer
root@source$ zxfer -dFPv -T zbackup@storage -R zroot/var nas/backup/bree-zxfer

And best of all, the storage server does not have SSH enabled for root. Success.


Arghh. I just spent 30 minutes trying to set up a locked-down restricted shell on my FreeBSD box, and I want to help you avoid doing the same. My challenge was properly setting the PATH variable so that the user could not bust out and run arbitrary commands. The problem was ensuring that PATH was set for both interactive and non-interactive shells. The interactive ones were easy using either .zshrc or .bash_profile. But although the documentation for bash says it reads .bashrc for non-interactive shells, it did not.

But finally I found that .zshenv worked, so now I can use a restricted zsh. Yay!
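For anyone who wants the recipe, a minimal sketch (the user name and paths are illustrative; zsh runs in restricted mode when invoked as rzsh):

root@host$ ln -s /usr/local/bin/zsh /usr/local/bin/rzsh
root@host$ pw useradd -n limited -m -s /usr/local/bin/rzsh
root@host$ echo 'export PATH=/usr/local/restricted_bin' > /home/limited/.zshenv
# consider making .zshenv root-owned so the user cannot loosen their own PATH
root@host$ chown root:wheel /home/limited/.zshenv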


I am a big fan of virtualization of operating systems. It allows for easy testing and, obviously, running multiple operating systems on one machine. At my company, we use VMware ESX for infrastructure virtualization, but for my own use (professionally and personally) I really like Oracle’s VirtualBox. It is fast, reliable, and, best of all, free.

As I work for a large, centrally managed company, we unsurprisingly use a standard (Windows) operating system across all of our hardware. To a right-thinking computer user, this is clearly not acceptable. While I wish I could just discard the standard company system image, I cannot. For my daily work, I am a Linux fan (Fedora is my distribution of choice). Virtualization allows me to merge those two worlds in a relatively harmonious way. My end goal is to run my company’s OS image inside a virtual machine on top of my preferred Linux installation. But getting there can be a challenge.

Installing an OS inside a VM is straightforward and not worthy of a blog post, but that does not help me here because I need to use the company-provided imaging tool that not only sets up the OS but also installs all of the corporate software and settings. This is done using a pretty slick tool (name intentionally withheld) that handles everything once the computer is registered on the back end by our IT staff.

This works great if I am installing onto bare metal. Otherwise, there are challenges. Below is a slightly dramatized version of my install process. I won’t recount every iteration I tried, but hopefully it is helpful to someone.

Once I got my new machine, I happily blew away the company OS install and got Linux working. (After making a backup; what kind of heathen do you think I am?) VirtualBox, check. Bootable image of the system imaging tool, check. Here we go.

Unknown computer

Well, I guess that makes sense. Our IT staff registered the physical machine; their backend would know nothing about a VM running on top of it. I pondered what they could use to identify the machine. Obvious choices included:

  • MAC Address
  • Hardware Serial Number
  • CPU Serial number (ick)

I decided to start with the MAC address, as that was the easiest to change in the VM. I wanted the VM to use the same MAC address as the computer itself. In order to do that, however, I had to temporarily change the computer to use a different one, as having duplicate MAC addresses on the same physical network will cause problems. (I am using bridged networking.) So I changed the MAC address of the computer using ifconfig to something new (I just incremented the last byte by one) and then copied the original one into VirtualBox. This can be done under the advanced settings for the network adapter.
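In concrete terms, the dance went something like this (the interface name and MAC values are illustrative, and M3065 is the VM name used later in this post; VBoxManage can also set the VM’s MAC from the command line instead of the GUI):

host$ sudo ifconfig eth0 down
# move the host off its factory MAC by incrementing the last byte
host$ sudo ifconfig eth0 hw ether 00:11:22:33:44:56
host$ sudo ifconfig eth0 up
# give the VM the machine’s original MAC (VBoxManage wants it without colons)
host$ VBoxManage modifyvm "M3065" --macaddress1 001122334455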

I rebooted into the imaging software again and, success, it started imaging the machine. I was quite pleased with myself. Sadly, the feeling was short-lived. The imaging utility put the OS on the virtual machine but then died once it had booted into Windows and wanted to start installing further software.

In reviewing the logs, I saw the same sort of error as I had gotten originally: the computer was not recognized by the back-end system. This seemed odd, as it got part of the way through the install. It appeared that at this later stage the tool used a different set of information to identify the computer on which it was running.

A specific section of the log file caught my eye:

-------------------------------------------------------------------------
Make: Innotek GmbH    Model: VirtualBox    Mfg:
Serial Number: 0
-  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -

I could see where this might cause a problem, as these values are not representative of the actual hardware. These values are returned to an operating system by examining the Desktop Management Interface (aka DMI) of the PC. Thankfully, VirtualBox provides a way to set the values it reports to a guest OS. To determine what values to use, I ran the Linux dmidecode tool, which provided the list of underlying values I would need:

# dmidecode 2.12
SMBIOS 2.7 present.
35 structures occupying 1856 bytes.
Table at 0x54E3F000.

Handle 0x0010, DMI type 0, 24 bytes
BIOS Information
        Vendor: Hewlett-Packard
        Version: L70 Ver. 01.10
        Release Date: 06/24/2014
        Address: 0xF0000
        Runtime Size: 64 kB
        ROM Size: 8192 kB
        [truncated]

Buried in the advanced section of the VirtualBox manual is a section entitled Configuring the BIOS DMI information which outlines the commands to set all of these values. I ended up setting more than I probably needed. (The commands are wrapped with backslashes so they remain runnable; you can also join each one onto a single line.)

VBoxManage setextradata "M3065" \
      "VBoxInternal/Devices/pcbios/0/Config/DmiBIOSVendor" \
      "Hewlett-Packard"
VBoxManage setextradata "M3065" \
      "VBoxInternal/Devices/pcbios/0/Config/DmiBIOSVersion" \
      "L70 Ver. 01.10"
VBoxManage setextradata "M3065" \
      "VBoxInternal/Devices/pcbios/0/Config/DmiBIOSReleaseDate" \
      "06/24/2014"
VBoxManage setextradata "M3065" \
      "VBoxInternal/Devices/pcbios/0/Config/DmiSystemVendor" \
      "Hewlett-Packard"
VBoxManage setextradata "M3065" \
      "VBoxInternal/Devices/pcbios/0/Config/DmiSystemProduct" \
      "HP ZBook 15"
VBoxManage setextradata "M3065" \
      "VBoxInternal/Devices/pcbios/0/Config/DmiSystemVersion" \
      "A3009DD10203"
VBoxManage setextradata "M3065" \
      "VBoxInternal/Devices/pcbios/0/Config/DmiSystemSerial" \
      "XXXXXXXXXXX"
VBoxManage setextradata "M3065" \
      "VBoxInternal/Devices/pcbios/0/Config/DmiSystemSKU" \
      "F9Y23UP#ABA"
VBoxManage setextradata "M3065" \
      "VBoxInternal/Devices/pcbios/0/Config/DmiSystemFamily" \
      "103C_5336AN G=N L=BUS B=HP S=ELI"
VBoxManage setextradata "M3065" \
      "VBoxInternal/Devices/pcbios/0/Config/DmiSystemUuid" \
      "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
VBoxManage setextradata "M3065" \
      "VBoxInternal/Devices/pcbios/0/Config/DmiBoardVendor" \
      "Hewlett-Packard"
VBoxManage setextradata "M3065" \
      "VBoxInternal/Devices/pcbios/0/Config/DmiBoardProduct" \
      "string:1909"
VBoxManage setextradata "M3065" \
      "VBoxInternal/Devices/pcbios/0/Config/DmiBoardVersion" \
      "KBC Version 94.51"
VBoxManage setextradata "M3065" \
      "VBoxInternal/Devices/pcbios/0/Config/DmiBoardSerial" \
      "XXXXXXXXXXXXX"

(I removed the actual serial number from the listing above.)

After this, I reran the imager for what turned out to be the final time and everything worked.

In the end, it turned out to be a bit more work than I outlined above, but the critical steps are covered. I found it both a very frustrating and a fun experience (once I got it working): a great puzzle to solve. It shows the power of virtualization software, and how unwise it is to trust what hardware tells you about itself, since it is so easy to manipulate.



After my former boss Susan Evans’s great blog post this morning, In the category of not as easy as it looks: Being Boss, I ran across a Harvard Business Review interview with the amazing John Cleese. It contained a great quote I had to share:

In the book Life and How to Survive It, which I developed with Robin Skynner, we decided that the ideal leader was the one who was trying to make himself dispensable. In other words, he was helping the people around him acquire as many of his skills as possible so he could let everyone else do the work and just keep an eye on things, minimizing his job and the chaos that would come with a transfer of authority.


Since the recent ruling in Verizon v. FCC, where the US Court of Appeals for the DC Circuit overturned the FCC net neutrality rules (see the EFF Net Neutrality page for background), there has been considerable discussion about the potential harms (or benefits) of this ruling. I have listened and read, and I feel that the mainstream media is missing the large but subtle danger this ruling poses and why it is critical that the FCC move to reinstate these rules.

The argument I keep hearing for why the net neutrality rules are needed is that if internet carriers are allowed to offer differentiated internet service for a fee, consumers will be harmed through higher prices. For example, ESPN might pay Verizon to let Verizon’s customers stream its video for free, but would then raise prices on consumers to cover the fee. While overturning the net neutrality rules would allow this, I don’t believe this is the threat. Both ESPN and Verizon know that consumers will prefer a lower-cost solution, so they will not go for that. And if Verizon and ESPN can make a deal that makes things cheaper for the consumer, it might even be a benefit. And here be dragons.

I believe that deals such as the one I outlined could be a short-term benefit to consumers, but they will change the way the economy of innovation works in a way that harms consumers in the long term, by shifting the cost structure of innovation in favor of existing, large players.

The history of innovation on the Internet has been driven by the little guys. Google, the giant it is today, started as two guys in a dorm room. Facebook, another giant, started in a dorm room. In these and many other instances, the innovators had very limited resources. But, and this is the critical point, once they started providing a service on the internet, access to their new service was provided at the same level as the big players’, and consumers could judge, say, Google vs. AltaVista on the merits of the products and make a choice as to which was better.

My fear is that without net neutrality rules, the barrier to entry will be increased for new companies that can disrupt the marketplace and bring innovation to all consumers. I am not worried about the ESPNs or Verizons of the world. I am worried that it will make getting started harder for the next Google or Facebook.

So I strongly urge the FCC to reclassify internet service providers as common carriers and re-institute and strengthen the net neutrality rules to ensure that the Internet continues to innovate in a free and fair way.



Whenever there is a group of people who intend to work together, whether a couple through marriage, friends planning an outing, citizens guiding a country, or employees running a company, decisions need to be made. Inevitably there are agreements, disagreements, and compromises. There are thousands of methods by which decisions can be reached, but the way in which a decision is reached and the motivations of the decision makers can indicate much about the health of the partnership.

The cynical question “who wears the pants in the family?” is often used to imply that there is one person in a marriage who is in charge. (We will ignore for this discussion the misogynistic nature of the question.)

The same question can be applied to a company. When there is a conflict, large or small, between parties in the company, who wins? In examining this problem, I divide a company into two main areas: primary functions and support functions. Primary functions provide the stated, outward product or service the company offers, while support functions are required to run a business but are not specific to any particular business sector. For example, at an automobile company, the engineering or assembly departments would be primary functions, while human resources or accounting would be support functions.

I have observed that as a company grows in size, the balance of power shifts from the primary functions to the support functions. In a small company, the majority of the employees are focused on the primary functions, and the support functions are usually very small (often woefully so). This results in a very strong alignment between the public goal of the company and the majority of its employees.

What happens as a company grows? The support functions must grow to answer the needs of a larger organization. No longer can one person handle all the accounting and human resources duties by themselves. Departments must be created and staffed.

This poses a huge risk. A company is an organism made up of people, just as you and I are made up of cells, and like all organisms it desires one thing above all else: survival. The larger the organization, the stronger this survival instinct becomes. And a desire to survive often leads to a high degree of risk-aversion.

Avoiding risk can be a dangerous thing, depending on how the organization responds. Sadly, the common way to avoid risk goes something like this:

  • A problem occurs (e.g., a bug in software, a lawsuit)
  • A process or procedure is created that would have caught that particular problem
  • That process is rolled out for everyone to implement

The problem with that methodology is that each process created takes time away from the core mission of the company. As an example, let us assume that FooBar Inc. makes widgets. Each widget takes 10 hours to complete, but 1% of the time a widget jams in the machines and causes 20 hours of downtime. This sounds horrible! So FooBar Inc. implements a new process that changes the manufacturing line by introducing a QA step on every widget. Sounds like a great idea. However, it adds 1 hour to each widget’s manufacture.

  • Old system: 100 widgets take 1,000 hours plus 20 hours of downtime: 1,020 hours total
  • New system: 100 widgets take 1,100 hours: 80 hours worse

So, in this scenario, a seemingly good idea (extra QA) actually makes the situation worse for making widgets. And this type of decision is made every single day in companies: a singular bad thing happens, resulting in a policy that is applied to all scenarios. By not accepting that some risk is unavoidable, or that the cost of avoiding some risks is greater than the risks themselves, companies fall into a spiral of creating more and more time-consuming processes which eventually stifle their ability to achieve their stated goals.

At some point in the life of almost every company there comes a tipping point: a point where the support organizations that oversee these policies and procedures take over. It is hard to see, but it can be detected with our original question.

Who wears the pants in your company? When there is a conflict between a support function and a primary function and it is presented to your senior leadership, which way do they decide?

A lot can be judged by that decision.



I’ve been thinking recently about the human characteristics that are critical to match when looking at a workplace: what works, what doesn’t, what causes stress. While there are certainly characteristics of workplaces that are truly unacceptable (threats, harassment, etc.), there is still a broad range of workplace environments that could be considered normal: large company, small company, strict hierarchy, flat management, and so on. How does one look at oneself and determine whether one will fit in well with a company’s philosophy?

In third and fourth grade, I was introduced to Dungeons & Dragons, the classic swords-and-sorcery role-playing game that was the passion of so many children and was supposed to bring about the satanic downfall of society. My friend Josh and I played somewhat infrequently in art class for a year or so. I enjoyed the game but never became a die-hard player. D&D passed out of my life for a while until I got to college, where my friends included some who were still avid players. I still found it hard to play regularly, but I always enjoyed the process of the game and the details that went into creating a character, even if actually role-playing that character posed a challenge for me. Physical characteristics and abilities were well defined, quantified through dice rolls, and formed the basis of how your character operated in the world. More abstract philosophical predisposition was wrapped up in a stat called alignment.

Alignment is a categorization of the ethical (Law/Chaos axis) and moral (Good/Evil axis) perspective of people, creatures and societies.

In D&D, mixing characters of different alignments can have unpredictable results. It should be obvious that mixing a character of Good alignment with one of Evil alignment is bound to cause problems, but the challenges of Lawful/Chaotic mixtures are more subtle and central to this discussion. Since I believe that we all strive toward the Good end of the spectrum, we will restrict ourselves to how the Lawful/Chaotic axis affects our lives.

At its most basic level, a person characterized as Lawful Good believes in following rules and respecting authority as the source of positive action in the world, while a Chaotic Good person has a strong inner moral compass to do good as they see it, without regard for established, recognized authorities. For example, the stereotypical medieval knight would be Lawful Good, as he follows a strict moral code shared among all knights, while Robin Hood did good deeds based on his own internal moral compass.

How does this affect the workplace? Companies have an alignment as well. An organization such as IBM could be seen as a Lawful organization: it has a traditional view of management hierarchy with well-established rules and codes of conduct. A company such as Google might be considered more Chaotic, as it has a more free-wheeling style and supports some level of autonomy in its employees. While all of these characteristics sit on a sliding scale from Lawful to Chaotic, having a general match is critical.

Picture a Lawful Good employee at a Chaotic Good company. The company will expect the employee to have an internal sense of what they want to accomplish, while the employee will be grasping for a level of structure that probably does not exist.

A Chaotic Good employee at a Lawful Good company? The situation is no better. The employee will be constantly fighting against a system they see no need for, while trying to reach the same goals.

Real-world examples are of course more nuanced than this, but having a good understanding of the kind of workplace environment in which you mesh best is critical as you look for work at any company. Learning these traits about a company is critical to your happiness and success there, and you should think about them.

And if you are unsure, take an alignment test; you may learn something!