May Contain Blueberries

the sometimes journal of Jeremy Beker


![](/images/5401996406_2a3f26ea9d.jpg)

I am officially done with Winter. Thanks for the pretty snow, but see that door over there? Please use it. Thanks!

I hadn’t realized just how much I needed that smell of fresh air, the colors, the sight of growing things until this past weekend. Tiffany and I went up to the Lewis Ginter Botanical Gardens in Richmond to take some pictures. Walking the grounds was nice, but entering the greenhouse was amazing! There was one wing of the greenhouse that was filled with spring flowers and the smell was just divine. I contemplated securing a cot and camping out there for the next few weeks. If they had wifi, I would be totally set.

Hopefully it will be just a few weeks and the flowers will start showing their faces outside. Until then, we may need to make a few trips up to the gardens to bolster my spirits.

All Pictures - Lewis Ginter Winter 2011

As a side note, we became members this year and it is a really great deal! It was $60 for an Out of Town Dual membership that gets you unlimited entry into not only the Lewis Ginter gardens but hundreds of others around the country including the Norfolk Botanical Gardens. You should do it!


In the same vein as yesterday’s post, I thought I would expand on it with my suggestions, based on the hundreds of people I have interviewed over the years, for those who are in the job market looking for work. I won’t go over the standard stuff like researching the company you are interviewing with, being on time, bathing, etc., which you can find in many other places.

Know how to shake hands

I always feel a little shallow talking about this topic, but it really is something that makes an impression. The first impression I get from a candidate who has what my father referred to as the “dead fish handshake” is really hard to overcome during an interview. It immediately puts me in mind of a person who is not self-confident. And when I interview, I am looking for personality far more than I am specific “resume” skills.

Look comfortable in your clothes

This is an area where I will amend the traditional wisdom on what to wear to an interview. The overly simplistic rule is “dress up for an interview, preferably a suit (or equivalent).” This is silly, as it is based upon a standard 60’s office culture. Modern wisdom has shifted: dress one notch higher on the formality scale than the office you are going into. This is a good general rule, but I will add a corollary. If you aren’t comfortable in your clothes, wear something else. Even if the “rules” would state that you should wear a suit to an interview, if you never wear suits, you have a badly fitting one you got for a wedding 5 years ago, and you will spend the entire interview fidgeting with it, don’t wear the damn suit. Find something close that you will look comfortable in.

As an interviewer, I will happily overlook a slightly less formal style of dress if you seem comfortable and relaxed. But if you look like you would like to claw your tie off, I will notice, and it will make me wonder whether you always act like that.

Keep Talking

Don’t give one-word answers. If you are asked a question, obviously you need to answer it. But look for the openings that will allow you to tell a good (appropriate) story that exemplifies why you are the right candidate. Make me want to hear you keep talking. Remember that beyond the work a company needs you to do, they also have to spend 8 hours a day with you. If you appear boring and uninteresting, you are at a disadvantage.

Know what you want to do

Have an idea of what you would like to do in a perfect world. And be willing to talk about it, even if the job you are interviewing for isn’t exactly that thing. Displaying a sense of vision and imagination shows you can think for yourself. If, when asked that question, you say that what you have always wanted to do is exactly what my job description says, I will not be impressed. I will assume you are either very boring or trying to suck up.

In the end, the interview is the potential start to a long term relationship. You are trying to impress the interviewers with you as a person. Keep that in mind and you should do fine.


My good friend and former super amazing boss at The College of William and Mary, Susan Evans, has been writing a series of blog posts lately that I have really been enjoying. I’ve enjoyed them so much, they have spurred me to write today. Her entry from today, Hiring? Listen and spend an hour, is particularly relevant lately as I have been working to add 7-8 people to my teams in the last few months and I spent yesterday talking with candidates at the William and Mary Career Fair. Susan makes many wonderful points in her article and I highly recommend you read hers first; I’ll wait.

I can’t emphasize the importance of quality hiring enough. While it is hard when you feel the pressure to get a new employee RIGHT NOW to relieve whatever pain you may be feeling in your organization, hiring the wrong person is always far worse. Remember that the interview process is your only protection against bringing a bad candidate into an organization; even in the best of situations you only have a few hours to determine whether this is a person you will count on for thousands and thousands of hours in the future. The cost of hiring the wrong fit is too high, and no matter your company, getting rid of someone you have hired who is the wrong fit is far harder (emotionally and practically) than not hiring them in the first place.

Trust your team

Never be the only person who contributes to the hiring decision. For any job I have, no matter how junior it is, I always have many people inside and outside my team do their own interviews with the candidate. This can range from 3 people beyond me to 5 or 6. My interview process is long and I get comments from candidates when I wrap up with them that it is more thorough than they have seen at other companies.

The hardest part is that everyone must agree that the candidate should be hired. If anyone has any doubts, the answer is no. It does not matter how much everyone else loves the candidate, everyone has veto power. This is critical to ensuring you only get a new employee that will succeed in your team. (And don’t overlook the ancillary benefit that it shows your current staff that you trust them with the responsibility to shape the team.)

Meet face to face

As hiring managers, we all make mistakes from time to time. What I have learned from my hiring mistakes is that the people with whom I have later been unhappy are the ones I hired based on the recommendations of others without meeting them personally, or with whom I only did a phone interview. This was the trap of being “too busy” and not seeing hiring as the most critical part of my job. Even when someone is being hired by one of my managers and won’t report to me directly, I now insist on getting time face to face with them. It is the only way to be sure.

The person is what matters

Here I just want to reiterate a point Susan makes in her closing: “the best hiring decisions are based on the type of person you hire and not the skills they have.” I can’t stress this enough. A resume is important, and the courses someone has taken in college are important, but in the end none of that matters. I am hiring a person, not a list of skills, and that is what you must focus upon.


I think it comes as no surprise that I would love some day to start up a small high tech company. When people talk about what it takes to start a technology company, there seem to be three main components: money, people, and an idea.

For many years I worried about the first two of these: people and money. But as I have gotten older (and perhaps wiser), I have been able to meet the right people, who in turn have connections to money. The tricky part is the elusive idea. The Internet and the rise of rapid development have made it so easy to implement an idea and share it with the world that it seems whenever I and my co-conspirators have an idea, someone has already done it.



A good architecture is consistent in the sense that, given partial knowledge of the system, one can predict the remainder. - Fred Brooks, Design of Design, pg. 143

I’ve been voraciously reading Design of Design, a new book of essays from Fred Brooks, author of the software engineering classic, The Mythical Man Month. I’ve been enjoying it immensely and would recommend it to all who deal in software. Combine it with a glass of wine, a comfy rocking chair, and relaxing music and life is good.

But, the quote above got me thinking about fractals; a topic I was quite enthralled with in high school. Yes, fractals, bear with me.

When I read that sentence, it felt like something I knew was being put into words simpler than I could ever have achieved. If you work in software, you know that when looking at a well designed application or API, there are similarities that can be recognized immediately. You become familiar with one portion of the code and it gives you insight into how the whole application works. This is the essence of what gets described as a beautiful or even elegant design.

It is something I have strived for as I design systems. But this brings me to the fractal topic. One of the key characteristics of fractals is the concept of self-similarity and independence of scale: as you zoom in or out of a fractal object, it looks the same. The way this is measured is by defining the dimensionality of an object. I won’t go into lots of detail, but the basic idea is that you can have fractional dimensions. A straight line is 1 dimensional and a square is 2 dimensional. But what about the line representing the coastline of a country? It isn’t a straight line, but it isn’t a solid either. And as you look closer and closer, from orbit all the way down to grains of sand, it basically has the same shape. This is the essence of self-similarity. Depending on the twists and turns and nooks and crannies, and, yes, the self-similarity, it will have a fractional dimension somewhere between 1 and 2. (The classic Koch snowflake curve, for example, has a dimension of log 4 / log 3, about 1.26.)

So how do I get from software to fractals? Well, it seems to me that if one could represent software (most likely its APIs and functions) and measure its dimensionality, one could use that as a measure of quality of design. The higher the self-similarity of the API, the better a design it might be.

I haven’t looked into whether this has been thought of before. But who knows, maybe there is a PhD in this idea.

Update: Turns out I was right, it is a PhD level topic: A Fractal Software Complexity Metric Analyser


Toby,

![](/images/830876604_99bb32a04b_m.jpg)

How did you know every time I drove up that I was coming to visit and make it down to the window to greet me? Was it the simple sound of my car or maybe some magical kitty sense I will never understand? But you always made me smile, postponing my entry to play with you at the window; to see but not hear your little meows.

As I started spending more time with Tiffany, your protective side came out, always finding a way to sit between us on the couch, like a 17th century chaperone; but coming to trust and accept me into the family. You knew when I was having a bad day and would forgo your feisty side and just sit in my lap for hours (only occasionally stabbing my leg as you tried to help). I don’t think you ever forgave us for removing the pictures above the bed that were just at tail height so you could rattle them in the morning to make sure that we got up not so much to feed you, but since you were up, shouldn’t we be?

There are so many good memories I have; I just wish we had had more time to make more. Wherever you are, take care, you will always be remembered in my heart.

Farewell, I miss you.

-Jeremy, your friend


I am a big fan of John Gruber’s website Daring Fireball. He always has good information and I appreciate his opinions on technical issues. (And yes, I have been supporting him for years; I have a whole collection of t-shirts.) With the whole brouhaha surrounding AT&T’s new data plan pricing, I have been getting increasingly frustrated with all the moaning and bitching people have been doing lately. Gruber has been covering it and agreeing with some of it, so as I was getting fed up this morning, I penned a long email to him. As I don’t want him to be the only recipient of my wisdom, here it is.

John,

I’ve been getting increasingly frustrated with the whining I am hearing all over the net regarding the changes to the AT&T fees for data. I am hardly an AT&T fanboy, but this has just gotten ridiculous; I think people are just looking for anything to bitch about.

There seem to be two issues at stake here: data costs and tethering.

Data Costs:

For anyone who uses < 2GB/month, this is a better deal by at least $5/month. Simple, no questions asked. If you always use < 200MB/month, you save $15/month. Also simple.


If you occasionally go over those limits, it is still probably a good idea. In my case (and I would have thought I was a power user, but I guess I find WiFi more often than not), I looked at the last 7 months and I went over 200MB once. So I would have had to pay an extra $15 that month, but I saved $15/month the other 6; six months of savings against a single overage still leaves me well ahead. Still a good deal.

As for that tiny percentage who uses > 2GB/month consistently, ok, fine, they can bitch, but I want pictures of their data usage.

Tethering:

Yes, AT&T is charging people $20 for no more data usage. BUT, people forget that AT&T is a business; they are not setting their $25/month price to cover the cost of every user using 2GB/month. If they did that, it would be much more expensive. They use an average value.

This is where the extra cost from tethering comes from. I believe it is safe to assume that an iPhone user without tethering will use less data on average than a user with tethering. AT&T believes they can make a profit by charging handset users $25/month based on “average” consumption and tethering users $45/month because their average usage will be higher.

Is any of this “fair?” Business isn’t about being fair, it is about making money. And that is what AT&T is doing. If someone doesn’t like it, switch carriers. But if they are going to keep their iPhone and AT&T, I wish they would stop complaining.

Thanks for being the recipient of my minor rant. As always, I love your coverage.

-Jeremy


When we last left our intrepid hero (geek), I had convinced myself that the new Western Digital drives with the 4kb block sizes operated perfectly fine under OpenSolaris, but that the frankenstein of my old server needed to be replaced. I figured if I was going to do this, I was going to do it right. The goal of this machine was storage, which meant a fast network, lots of SATA ports, and lots of drive bays. So, thanks to the wonders of the internet and Newegg, I found the following items.

If you look at the specs and are counting, you will see that this gives me a combined total of 14 SATA ports, more than enough for expansion as the case has 12 drive bays. The primary storage is the 6 new SATA drives attached to the LSI SAS card. I also brought over the 3 remaining 1TB SATA drives I had from my old server, attached to the motherboard SATA controller. I don’t trust them completely anymore, but they make a nice big scratch space volume.

Also of note is the challenge I had getting enough working drives. I needed 6 hard drives to build the system, but I had to order a total of 8 before I had 6 working ones. I’m not sure what to take away from this. Obviously, I don’t like the idea of drives dying, but I much prefer DOA drives to ones that will fail later. These are clearly consumer grade devices, which is exactly why they are going into a dual-parity RAID system: I don’t trust them.

In my old server, I used an old ATA drive as the boot drive. It was small (capacity) and loud and probably used up more power than it should. This time, I wanted something small for the boot drive, since I didn’t really care about it terribly much. So, for the new server, I took a 2.5” laptop SATA drive and plugged in a SATA->USB adapter and am using that as the boot drive. It doesn’t take up much room and draws very little power.

So, after all the parts arrived and 2 rounds of RMAs with dead hard drives, I built the new system. Obviously OS installation was the first step. So I temporarily hooked up a DVD drive into the system and booted it up. BIOS, check. Bootloader, check. Kernel loaded, check. Detecting devices………….nothing. Shit.

I ended up doing quite a few debugging steps at this point, trying older and newer versions of OpenSolaris. No change. I finally learned how to boot the kernel such that it drops into the kernel debugger (if that doesn’t make you run in fear, it should) and get it to boot while spitting out lots more debug info. The specific steps, for anyone coming to this page with similar troubles, were to first add the parameter -kdv to the kernel line in GRUB, then, when you get dropped into the debugger, enter the following items (roughly: disable the extra CPUs, turn on verbose module loading, and continue the boot).

use_mp/W0
moddebug/W80000000
::cont

This hardly gave me a culprit. But I noticed, after enabling and disabling various things in the BIOS, that the problem always seemed to occur around the time the system loaded the ATA drivers. As the only thing attached via ATA was the DVD drive, I researched in that arena. (My apologies that I don’t have all the links I used to find the info; I was more concerned with getting it working than preserving the link history for posterity, but you do get my solutions.) In the end, I found other people who reported that their CD/DVD drives did not function properly with DMA enabled. I added the following option to the kernel boot:

-B atapi-cd-dma-enabled=0

And, voila! The system booted into the installer. To be honest, this would not work as a long term solution; not using DMA for the DVD drive makes it slow and bogs down the rest of the system. But since the only use of the drive was for the install, I didn’t care.
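For anyone editing the GRUB entry by hand at the boot menu, the edited kernel line ended up looking roughly like the sketch below; the exact path and any existing -B properties vary by release (additional properties are comma separated), so treat it as an approximation rather than a copy of my exact line.

# Sketch of the edited GRUB kernel line; if the stock line already has
# a -B option, the workaround property gets appended with a comma
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B atapi-cd-dma-enabled=0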

The install of the base OS went smoothly, as did the upgrade to the latest development release. While I will say that in general the OpenSolaris userland tools are lacking compared to Linux, and that the package tools are lacking in general, upgrading the whole system is a breeze.

# update the packaging system itself first
pkg install SUNWipkg
# point the opensolaris.org publisher at the dev repository
pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org
# then pull the whole image up to the latest release
pkg image-update

Next came the CIFS (Windows file sharing) server.

# kernel and userland pieces of the in-kernel CIFS server
pkg install SUNWsmbskr
pkg install SUNWsmbs
# enable the service and its dependencies (-r)
svcadm enable -r smb/server
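The typical follow-up, once you have datasets to share, is to join a workgroup and flip on the sharesmb property. This is a sketch of those steps rather than an exact transcript of what I ran; the workgroup and dataset names are just examples.

# join the local Windows workgroup (name is an example)
smbadm join -w WORKGROUP
# sharing over CIFS is just a dataset property (dataset name is an example)
zfs set sharesmb=name=av nas/av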

Now on to building the new drive arrays. Given that I never make things easy on myself, I still had a slight conundrum. One of the 6 new drives destined for the new machine was still in the old server as the replacement for the drive that died. That meant I only had 5 drives for the new machine where I wanted 6. AHA! But this was going to be a dual parity RAID system (raidz2 to be specific), which means that once operational, it can sustain a 2 drive failure without losing data. The question arises: can you create an array from scratch that is in a degraded state and then add the drive later? The answer is yes! The basic idea is to create a sparse file to be a temporary “drive” for array creation, then immediately remove it from the array (degrading the array), since it wouldn’t actually work to store data on it anyway; there wouldn’t be enough space on the boot drive to hold all the parity. Then, once the data is transferred, “replace” the fake drive with a real one and let the system heal itself.

# Create temporary "drive" so I don't lose
# redundancy on source array
mkfile -nv 1500g /TEMP-DRIVE
zpool create -f nas raidz2 c9t0d0 c9t1d0 c9t2d0 c9t3d0 c9t4d0 /TEMP-DRIVE
zpool offline nas /TEMP-DRIVE
# Do all of the transfers and install 6th drive
zpool replace nas /TEMP-DRIVE c9t5d0
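While running in this state, zpool status reports the pool as DEGRADED with the offlined temporary device listed, which is a handy sanity check that the trick worked:

# the pool stays usable while degraded; check on it with
zpool status nas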

All good! Now on to transferring the data from the old system to the new. One of the (many) awesome features of ZFS is the ability to snapshot filesystems. On top of that, you can easily serialize these snapshots, send them around, and have them received on another filesystem or even another machine. The upshot is that I could transfer data to my new server without having to stop using my old one for the bulk of the time. Again, the basic structure was to:

  1. Snapshot the old system and transfer the data over to the new server (while still being able to use the old server). This took about 20 hours, but as I had no loss of usage, that is ok.
  2. Stop using the old server, take another snapshot, and transfer it to the new server. The delta between the first snapshot and the second is small, so the transfer time was also small (less than an hour).
  3. Power down the old server and start using the new server.

And here are the gory details:

zfs snapshot -r nas@transfer-primary
zfs send -v -R nas/scratch@transfer-primary | \
    ssh gothmog@rivendell.home pfexec /usr/sbin/zfs recv -v -u nas/scratch
zfs send -v -R nas/av@transfer-primary | \
    ssh gothmog@rivendell.home pfexec /usr/sbin/zfs recv -v -u nas/av
zfs send -v -R nas/backup@transfer-primary | \
    ssh gothmog@rivendell.home pfexec /usr/sbin/zfs recv -v -u nas/backup

# Stop all use of system on bywater

zfs snapshot -r nas@transfer-final
zfs send -v -R -i nas/scratch@transfer-primary nas/scratch@transfer-final | \
    ssh gothmog@rivendell.home pfexec /usr/sbin/zfs recv -v -u nas/scratch
zfs send -v -R -i nas/av@transfer-primary nas/av@transfer-final | \
    ssh gothmog@rivendell.home pfexec /usr/sbin/zfs recv -v -u nas/av
zfs send -v -R -i nas/backup@transfer-primary nas/backup@transfer-final | \
    ssh gothmog@rivendell.home pfexec /usr/sbin/zfs recv -v -u nas/backup

Honestly, not terribly gory for what I did. Note that transferring that way brought along everything about the filesystem, including the NFS exports, the CIFS exports, permissions, ACLs, everything. Very painless.
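That works because the NFS and CIFS shares are just dataset properties, so they ride along inside the send/recv streams. A quick check on the new box confirms it (dataset name is from my setup):

# the share settings came over with the filesystems themselves
zfs get sharesmb,sharenfs nas/av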

At this point I was largely done. But as I mentioned in the hardware list, I didn’t get the SAS card until later. I realized that it was a waste not to use the 3 old 1TB drives from my old server. Given their age and the failure of one of them, I didn’t trust them much, but using them to move my network scratch space off the raidz2 array onto a separate striped array seemed like a good use. As I had no spare SATA motherboard ports, I needed an external card. The LSI cards had been recommended to me, so I picked one up. It is very well supported by OpenSolaris. I changed the firmware on the card to the IT firmware (the non-RAID firmware, since I was not going to use the hardware RAID), which took a little work to get a DOS boot environment, but that is another entry.

As it is the better card, I wanted the raidz2 array on the new card. Again, ZFS to the rescue. Under Linux this would have been a pain, but it was extremely simple under OpenSolaris.

# prepare the array to be moved
zpool export nas
# power off box, swap cables to new card, power up
zpool import nas

That was it.

Since then, I have installed a few more things to get things working well.

Simply put, I am thrilled with the new machine. I expect it will last me for quite a while. Oh, the Barad-dûr reference? You will have to wait for the pictures.


A week or so ago, in my weekly status email from my OpenSolaris NAS, I got an unfortunate notice.

status: One or more devices has experienced
   an unrecoverable error.  An attempt was made
   to correct the error.  Applications are unaffected.

The last sentence is the welcome news that one gets from using cool things like ZFS. But the first sentence strikes fear in the heart of someone who has nearly 3 Terabytes of data stored in that drive array. So, it was clearly time to replace the failing drive and get the array back up to full redundant goodness. The simple solution would have been just to replace the aging drive and be done with it. But since I am at about 90% capacity on the array, I figured I would take the opportunity to grow the size of the array by replacing not just the single failed 1 terabyte drive, but by purchasing 4 new 1.5 terabyte drives. I know there are 2 terabyte drives out there, but they are significantly more expensive and I didn’t want to go there.
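The mechanics of the swap itself are a single command per drive, and zpool status is how you watch the resilver. This is a sketch with illustrative device names rather than my exact session.

# swap the failing disk for the new one (device names are examples);
# if the new drive sits in the same slot, a single device argument works
zpool replace nas c8t2d0 c8t5d0
# watch the resilver progress and the estimated completion time
zpool status nas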

Here is where I started losing the battle that later ensued. I was traveling in Boston at the time and decided to order the drives while I was on the road so that they would arrive soon after I returned and I could get things moving faster. I settled on the Western Digital Caviar Green WD15EARS 1.5TB 64MB Cache SATA 3.0Gb/s 3.5” Internal Hard Drive from Newegg. I liked the low power consumption, the large size, and the cool running. However, I did not do my homework on Western Digital’s new “Advanced Format” feature.

This feature changes the atomic storage unit on the drive from 512 bytes to 4 kilobytes. In the abstract this is good. It means that the drive itself is more efficient and needs less overhead on the platter to store error correction information (ECC), resulting in more storage space. However, it also means that if you write less than 4 kilobytes, or write data that is not aligned to those 4 kilobyte boundaries, the drive needs to read the entire 4 kilobyte sector from the platter, “merge” in your new data, recalculate the ECC, and write it back out. Again, that doesn’t sound bad. But if your operating system assumes that the disk block size is 512 bytes, it writes in chunks aligned on 512 byte boundaries, not 4 kilobyte boundaries, resulting in TONS of these read-modify-write cycles, absolutely killing throughput.

Guess what assumption OpenSolaris makes? If you guessed 4 kilobytes, you are wrong, and don’t know Murphy too well. 512 bytes it is. So, while the drive will work, its performance will absolutely suck. I mean bad. The system was estimating it would take 4.5 DAYS to move about 800 gigabytes of data onto the new drive. And while 800 gigabytes is a lot of data, it should not take that long.
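If you want to see what block size ZFS has assumed for a pool, the ashift value in the cached pool configuration tells you: 9 means 2^9 = 512 byte blocks, while 12 would mean 4 kilobytes. A quick (if blunt) way to check:

# dump the cached pool configs and look for the vdev block size assumption
zdb | grep ashift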

So, it turns out I was wrong: Solaris does just fine with the drive; it was just my assumption about speed that was off. I will not go into the day-long gnashing of teeth that occurred as I tried different things and tested various drives on the machine. In the end, I convinced myself that the drive was operating acceptably and that it would in fact take 3 days or so to do the resilvering.

But my efforts also brought out the kindness of the internet and old friends. I posted a frustrated tweet, and a good friend of mine from college, Monique, emailed me saying that her husband was an expert in Solaris, ZFS, and hard drives and asking if they could help. Eric and I emailed back and forth for much of the day, me sharing what I had done and my thoughts about the problem, and him helping me with ideas and questions. I can’t thank them enough and I certainly owe each of them a few beers (or other beverage of choice) next time I see them.

But, in all of this, I realized the frankenstein machine I have built for this data is, well, a frankenstein of a machine. And as this has become my central data store at home, I decided to engage in some retail therapy and have purchased the parts for a new, better computer to house the system. This one will have room for 9 drives, if I ever get up to that point, and a new motherboard/CPU/power supply to more comfortably handle the system. It will no longer be shoehorned into a Dell case that was not designed to hold that many drives. So, I am looking forward to building my new system when all the parts arrive later this week (hopefully). If I think of it, I will take pictures and post a montage.