May Contain Blueberries

the sometimes journal of Jeremy Beker


I have been hearing about Tailscale for a few years now. I think I first heard about it from Casey Liss on the Accidental Tech Podcast, and I honestly dismissed it as another one of his wild goose chases (see: Raspberry Pi garage door sensor). But it has popped up more than once, and I recently listened to an interview with Tailscale’s co-founder, Avery Pennarun, on Stratechery that I found very interesting. His road to networking and computers felt very similar to mine, although he obviously carried it quite a bit further by founding a company. I had been noodling with the idea of trying Tailscale for my use case, and this pushed me over the edge.

I already have a very well-developed home network, so I am not the traditional user. I am some odd hybrid of a “personal” use case with a level of “business” complexity. I have two use cases that I currently use OpenVPN for.

  • Remote access for my and my wife’s personal devices when we are outside the home. This is mostly to route internet traffic through our home connection when we are on untrusted networks, but occasionally to access services hosted at home.
  • To connect two servers that exist outside our home network to the home network for more secure and easy access.

Many people seem to use Tailscale to connect all of their devices. That was not of interest to me. I already have the physical network at home set up and segmented the way I want it, and recreating that with a mesh network held no appeal (it would also involve dozens of devices, not all of which are capable of running Tailscale). So my initial goal was a simpler-to-manage version of what I was already doing with OpenVPN.

I chose to set up my OPNsense router as the central node of the network and make it both a subnet router and an exit node. The big change I made from what seems to be a “standard” installation: normally an exit node will SNAT outgoing packets so that they appear to come from the exit node itself. I am perfectly happy having the 100.64.0.0/10 tailnet addresses coming into my internal network. I want the tailnet to appear as just another segment of my network, so that even non-Tailscale devices can send traffic to hosts inside the tailnet.

In order to do this, I configured the core router with:

tailscale up --snat-subnet-routes=false

Because Tailscale was no longer doing the SNAT, I also had to configure OPNsense to do the proper outbound NAT for anything with those addresses. And I set up my DNS server to forward requests for hosts in my ts.net MagicDNS zone to the Tailscale DNS server, 100.100.100.100.
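Assuming Unbound (the default resolver on OPNsense, which is what I am using), that forward amounts to roughly the following as a raw unbound.conf stanza; OPNsense exposes the same thing in its web UI, and the zone name would be your tailnet’s full MagicDNS name:

forward-zone:
    name: "your-tailnet.ts.net"
    forward-addr: 100.100.100.100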

I set up the core router to also advertise several routes to my internal network (subnets that I trust the clients to get to). When installing tailscaled on my two remote Unix hosts, I needed to explicitly tell them to accept those routes via:

tailscale set --accept-routes=true
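For completeness, the router side advertises itself as an exit node and advertises the internal routes when Tailscale is brought up; the subnets below are placeholders for whichever segments you actually want to expose:

tailscale up --advertise-exit-node --advertise-routes=192.168.10.0/24,192.168.20.0/24 --snat-subnet-routes=false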

It took me less than a day of occasionally putzing with things to have it all set up. The “fixed” servers could talk to hosts on my local network. All the clients inside my network could talk to the hosts connected via Tailscale using their 100.64.0.0/10 addresses or MagicDNS names, even though those clients don’t have tailscaled running. And for remote access on my mobile devices, I set up the Tailscale app, and they are able to connect and use my home network as an exit node.

Overall I am pleased with how easy it was to set this up. I was a little nervous about the security implications, but was pleased to see the tailnet lock feature, which prevents unauthorized devices from being added to your network even if Tailscale itself is acting maliciously or is compromised.

So far, so good. I still need to see how well Tailscale can punch through restrictive networks to build tunnels, but I am happy so far.


Two posts in a few days, huh. I saw a tweet (xeet?) that I won’t link to because I hate what that platform has become, but it talked about the wonders of the new Claude Code tool. I figured I would give it a shot, and it is pretty impressive. I had a few goals I wanted to accomplish: upgrading some components of my blog engine and some CSS work that I hate doing. So I asked it to do it for me. Below is a summary (produced by Claude) of the changes it made for me (we made together?). All this ended up costing me $1.83. A bit expensive, and I could see it being very expensive if I wanted to use this all day for my job. But it is an interesting experiment in what is to come.

Everything below the line here was written by Claude.


Today I spent some time modernizing the technology stack that powers this blog. Over time, web technologies evolve, and it’s important to keep up with these changes to maintain security, performance, and compatibility with modern browsers.

Bootstrap Upgrade: From 4.6.2 to 5.3.3

The first major change was upgrading from Bootstrap 4.6.2 to the latest Bootstrap 5.3.3. Bootstrap 5 brings several important improvements:

  1. Dropped jQuery dependency - While my site still includes jQuery for certain functionality, Bootstrap 5 no longer requires it as a dependency, making for lighter and faster pages.

  2. Improved CSS variables - Better support for CSS custom properties, making theme customization easier.

  3. RTL support - Bootstrap 5 has better support for right-to-left languages through logical properties.

  4. Upgraded Popper.js - Moved from the older popper.js library to the newer @popperjs/core package.

The upgrade process wasn’t entirely straightforward. Several changes were needed:

  • Updated the package.json file with the newer dependencies
  • Modified the Makefile to reference the new package locations
  • Updated CSS classes like float-right to their Bootstrap 5 equivalents (float-end)
  • Adjusted SCSS imports to accommodate the new Bootstrap file structure
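As a rough sketch of the SCSS side of that last item (paths here assume the Bootstrap npm package is on the Sass load path, not necessarily my exact layout), the Bootstrap 5 entry point looks something like:

@import "bootstrap/scss/functions";
// variable overrides go here, before the rest of Bootstrap
@import "bootstrap/scss/variables";
@import "bootstrap/scss/bootstrap";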

Jekyll Upgrade to 4.4.1

In addition to Bootstrap, I also upgraded the Jekyll static site generator from 4.3.2 to 4.4.1. Jekyll 4.4.1 includes various bug fixes and performance improvements that make the site generation process faster and more reliable.
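With Bundler in the mix, that upgrade is just a version bump and an update (the constraint shown is illustrative):

# Gemfile
gem "jekyll", "~> 4.4"

# then
bundle update jekyll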

Sass Warning Challenges

One interesting challenge during this upgrade was dealing with Sass deprecation warnings. The current version of Dart Sass (which Jekyll uses to process SCSS files) warns about several deprecated features that Bootstrap 5 still uses:

  1. @import rules - These are being replaced by the newer @use and @forward rules
  2. Global color functions (red(), green(), blue()) - Being replaced by namespaced versions (color.channel())
  3. Other global functions like mix() - Being replaced by color.mix()
  4. Mixed declarations in CSS that will behave differently in future Sass versions

While these warnings don’t affect functionality today, they indicate future changes that might impact the site. A full resolution would require creating custom Bootstrap wrappers using the modern Sass module system, but that’s a project for another day.
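For reference, and purely as a sketch of where Sass is heading rather than anything Bootstrap 5 supports today, the namespaced replacements look roughly like this:

@use "sass:color";

$brand: #336699;

// old, deprecated: mix($brand, black, 80%) and red($brand)
// new, namespaced:
$shade: color.mix($brand, black, $weight: 80%);
$red-channel: color.channel($brand, "red", $space: rgb);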

Ruby Version Requirements

This upgrade also highlighted how important it is to maintain compatibility with the right Ruby version for Jekyll. I discovered that Jekyll 4.4.1 works best with Ruby 3.2.3, not the newer 3.3.x versions, which caused some compatibility issues with the Jekyll logger.
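Pinning that is simple enough; a version manager picks up a .ruby-version file and Bundler enforces the same constraint from the Gemfile (a minimal sketch):

# .ruby-version
3.2.3

# Gemfile
ruby "3.2.3"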

In Summary

These updates represent the kind of regular maintenance that keeps a website healthy and current. While users might not immediately notice these changes, they lay the groundwork for future improvements and ensure the site continues to work well with modern browsers.

The upgraded packages are now being tested on a separate branch before being deployed to production. This approach helps catch any compatibility issues or styling problems before they affect the live site.

There’s always more to do in the world of web development, but keeping dependencies updated is a vital first step in maintaining a healthy website.


There is a joke somewhere about opinions and assholes. It seems to me that writing about AI and its impact on everything has been proving that to be true. Yet, here I am, adding more to the pile. In my defense, I am writing this mostly to settle my own thinking and put down my current opinions for reference so that when I am wrong, I have something to look back on later.

There have been rumblings about the power of LLMs and “AI” for quite a while, but for me it became real after GitHub released Copilot in 2021. This is the point where, as a software engineer, I saw it could have a direct influence on my livelihood, and lots of people started opining on how LLMs would change the world forever, be a flash in the pan, or maybe end the world.

As a geek, I was pretty curious. I made some early attempts at running some of the AI models locally on what was, at the time, a beefy GPU. I was not impressed. It was very difficult to get running, the results were mediocre at best, and, using just the sound of the fans as an indicator, it was hugely wasteful. It seemed like the biggest use case was making bad pictures with too many fingers. I dismissed LLMs and “AI” as a crutch that was decades, if not longer, away from being anything useful. I think this was partly my pride as a staff engineer, partly my disdain for the search for silver bullets that would make everyone a successful software engineer, and partly my limited experience with LLMs at that point and their propensity for making shit up (aka hallucinating).

Then in 2023 I had an idea that started to change my mind. I still firmly believed that LLMs’ hallucinatory behavior was bad, but there was a crossover with my role as a dungeon master for a few Dungeons and Dragons campaigns, where hallucinating could be a good thing. So I tried something:

Write a damage liability waiver form for a dungeon where fantasy adventurers must fight undead creatures for practice. Include standard liability legalese. The name of the dungeon is The Dungeon of Darkness and the proprietor is Berala Evenfall.

Thus began my tentative use of LLMs. There is a lot of reasonable debate about using LLMs for generating creative content. I am uncomfortable that there are companies that have shamelessly used content without compensating the original authors in order to build billion-dollar companies. But to expand the enjoyment of my players in a casual setting, I found them to be very useful. From writing deeper backstories for minor NPCs to fleshing out my world building, they helped me have more fun playing D&D, and I think that was a good thing. I confined LLMs to making shit up in an arena where making shit up quickly was their strong point.

Tangent: I do ponder at times that there is a strange parallel between AI training and human learning. If I think of the human brain, what do we do? We are pattern matching engines. We take new situations and compare them to all of the experiences we have had in life to that point and try to figure out the most likely response to achieve the outcome we want. That doesn’t feel all that different to me from what is being done with LLMs. Even with humans, there is a fine line between being “inspired” by the content I have freely been exposed to in life and plagiarizing it. Back to the original story.

Sometime later in 2023 I got access to GitHub Copilot. And as much as I hated to say it at the time, I was grudgingly impressed. I did not see it as any kind of existential threat to human developers, but it was definitely a useful tool. It often produced a good first draft of what I was working on; enough that I could more quickly fix what it had done than type it from scratch myself. Or it would help me document what I had just written, commenting things more completely in a way that would be helpful to future developers. However, I found that it had an interesting bimodal distribution of results. Sometimes it would pre-generate a brilliant implementation of what I had barely started to do myself, while at other times it would stubbornly produce absolute garbage over and over. It made me realize that my mental model for how these tools can fail is nothing like how I think about code written by other humans.

That realization is, I think, what concerns me most as LLM and AI tools advance. For the whole of human history, when interacting with people around us, we have always had to make judgements about the veracity of what we are seeing, what they are doing, and whether we should trust them. And this is even before we take into account potential malice or intentional lying. The user experience of AI technology is to create tools that we interact with similarly to other humans. However, the failure modes of LLMs do not mirror those of people. For example, we expect a reasonable human to be able to say “I don’t know” if they are not confident in their answer (and we distrust people who lack that ability). AI tools? Not so much. See: using glue to keep the cheese on pizza. We have been conditioned (incorrectly, I might add) by pre-LLM software to believe that computers do not make mistakes. This brings a huge risk: our brains are just not set up to determine how much we can trust the information we get from these tools.

My current use of AI tools is sporadic. Copilot is part of my development workflow; sometimes it is useful, and other times I will go days without ever using it. I have also started dabbling with running some of the models locally again. The tooling has gotten far better (I use ollama and Enchanted), and being able to run the models on my MacBook Pro has significantly improved the experience. There is something magical about being able to run a query against what is basically the whole of the internet, locally and offline, from your laptop. It is a nice backup. And being able to run multiple models side by side to compare allows me to find the right tool for the job, analogous to consulting different human experts. I also feel that I am learning where these tools fall apart and where I need to be wary.

So where does that leave me in early 2025? My biggest error early on was forgetting how quickly technology can move. I feel like I have been watching the technology improve faster and faster right before my eyes. There was a quote I heard on the Stratechery podcast (not original, I am sure) from Ben Thompson about progress: that we can go “decades with years of progress and then years with decades of progress.” I feel like we are in the second situation. How long will it last? No idea. But I don’t think we are done seeing rapid growth.


Shortly after I started my new job at Kolide in March, I was presented with a very interesting problem. We dynamically create views in our database to perform certain queries that are specifically scoped within our system. These views are then queried by our webapp before being destroyed when they are no longer needed. This had been working perfectly fine since the original feature was created, but because the view needs to be created (a write operation), all of the work had to be performed on our primary (writer) database. This presented several problems:

  • It added load to our primary database for what was functionally a read-only operation
  • It required that we do some machinations to switch users to ensure that the view was restricted as much as possible
  • There was a danger that an out-of-control query could impact our primary (writer) database

My project was to come up with a way to do the same operations, but use our read-only replica to perform the queries themselves, isolating them as much as possible.

The high level solution was pretty straightforward:

  • Create the view on the primary
  • Execute the query using the view on the read-only replica
  • Remove the view on the primary

A simple implementation would look something like this in Ruby:

ActiveRecord::Base.connected_to(role: :writing) do
  create_view_function
end

ActiveRecord::Base.connected_to(role: :reading) do
  execute_query_using_view
end

ActiveRecord::Base.connected_to(role: :writing) do
  delete_view_function
end

However, if you run this code, you will fail on execute_query_using_view with an error stating that the view does not exist. And this is where things get complicated.

Tangent on Postgres replicas: When you have Postgres databases set up in a primary-replica relationship, the replica is kept up to date by receiving updates from the primary as changes are made to the primary database. This is done through the WAL (write-ahead log). Before the primary database writes out data tables to disk, the complete set of transactions is recorded in the WAL. This allows the primary to recover the database to a consistent state in the event of a crash, but it also facilitates replication. The replica streams the contents of the WAL from the primary and applies those changes to its own data tables, resulting in an eventually consistent copy of the primary. And therein lies the rub: the replica is potentially always a little behind the primary.
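You can watch this happen directly in psql with the standard Postgres LSN functions (the literal LSN below is just an example value):

-- on the primary: the current write position in the WAL
select pg_current_wal_lsn();

-- on the replica: how far replay has gotten
select pg_last_wal_replay_lsn();

-- on the replica: difference from a primary LSN (negative means the replica is behind)
select pg_wal_lsn_diff(pg_last_wal_replay_lsn(), '0/16B3748');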

So the error we received that the view did not exist on the replica is really that the view does not exist on the replica yet. The replica has not had enough time to replay the WAL and get the view in place before we tried to query against it.

This requires a more refined solution, then:

  • Create the view on the primary
  • Wait until the view has been replicated
  • Execute the query using the view on the read-only replica
  • Remove the view on the primary

Tangent on transaction IDs: Every transaction that is committed to the database is assigned a transaction ID. Per the documentation “The internal transaction ID type xid is 32 bits wide and wraps around every 4 billion transactions.” There are useful functions that can be used to query the status of the transactions that have been committed to the databases.

So, in order to make this work, we have defined two functions that can be used to help understand what has been committed and where.

#
# Return the WAL location (LSN) of what has actually been written out from the server's internal buffers
#
def committed_wal_location(connection)
  # Get the current writer WAL location
  results = connection.exec_query("select pg_current_wal_lsn()::text")

  # the result is a single row with a single column, pull that out
  results.rows.first.first
end

and:

#
# Checks to see if the connection passed has replayed the WAL up to the given location in its tables
#
def has_replica_committed_wal_location?(connection, wal_location)
  # create bind variable
  binds = [ ActiveRecord::Relation::QueryAttribute.new(
      "id", wal_location, ActiveRecord::Type::String.new
  )]

  # Compare the replica's last replayed WAL location to the writer's location passed in
  results = connection.exec_query("select pg_wal_lsn_diff ( pg_last_wal_replay_lsn(), $1 )", 'sql', binds)

  # the result is a single row with a single column, pull that out and convert to a number
  # negative means that this connection is behind the WAL LSN that was sent in (lag)
  # zero means we are in sync
  # positive means the replica is in the future, shouldn't happen :)
  !results.rows.first.first.to_i.negative?
end

Using these methods, we can build a more robust system that accomplishes our goal:

ActiveRecord::Base.connected_to(role: :writing) do
  # Check out a connection to the pool and return it once done
  ActiveRecord::Base.connection_pool.with_connection do |connection|
    begin
      # Generate dynamic views
      connection.exec_query_with_types(create_statements)

      # Explicitly commit this so that it gets pushed to the replica
      connection.commit_db_transaction

      # Get the WAL current position so that we can wait until the replica has
      # caught up before executing the query over there
      view_creation_timestamp = committed_wal_location(connection)

      # Run the query on the replica
      ActiveRecord::Base.connected_to(role: :reading) do
        ActiveRecord::Base.connection_pool.with_connection do |ro_connection|
          # Verify that the replica is at least up to date with the primary before executing query
          retries = 0
          while !has_replica_committed_wal_location?(ro_connection, view_creation_timestamp)
            raise ReplicaLagError.new("Replica has not committed data in #{retries} tries.") if retries > MAX_REPLICA_RETRIES
            retries += 1

            # Back off (quadratically) to give the replica time to catch up
            sleep(0.1 * retries**2)
          end

          results = ro_connection.exec_query_with_types(view_sql)
        end
      end
    ensure
      # Drop the views that we created at the beginning
      connection.exec_query_with_types(drop_statements)
    end
  end
end

With this code, we accomplished our goal: we can create views dynamically and use them to restrict data access, while still gaining the advantages and protections of actually executing queries against those views on our replica systems. As of this writing, this has been in production for several months without any issues.


I love tracking data. This can be in the form of computer and network tracking using Zabbix, personal health data through my Apple Watch and Apple Health, or, as is the subject of this post, weather data. But sometimes getting things working can be really frustrating, and I wanted to document what I found while setting up my most recent acquisitions from Netatmo so that someone else can hopefully benefit from it.

I have had a Netatmo weather station for a few years now and have been quite happy with it. I recently acquired another indoor station as well as a rain gauge and a wind gauge. Setting those up is where I ran into some issues, which I finally solved. I received the additional indoor unit first, and setting it up using the iOS application went as expected; things started working just fine. I only started having issues when the outdoor rain and wind sensors arrived.

I tried setting them up using the iOS application, and they would never connect to the main unit. No matter how many times I tried, it just failed. Searching the Netatmo support site, I found a reference to a Mac application that could be used to add modules. While very finicky, and despite repeatedly telling me it had failed to add the modules, it in fact did.

Unfortunately, while the rain and wind sensors now worked, the outdoor and additional indoor modules failed to work. So I removed those two units and used the iOS app to add them back. But then the rain gauge and wind sensor failed.

After much trial and error, I removed all of the modules and then re-added them all using the Mac application. This is what finally got them all operational at the same time. My theory is that the two methods are subtly different and each overwrites data set up by the other.

So if you have issues (or honestly, even if you don’t), I would recommend using the Mac (or Windows) application to add all Netatmo modules.

Now back to enjoying all my data.


I always enjoy seeing how others have set up their work environments but I realized that I have never shared my own. I’ll do my best to give a list of all the items in the picture as well as what I have tried to achieve.

Philosophically, I want a very clean workspace. I find that if my surroundings are neat and clean, I am able to focus more on whatever I am doing, whether work or play. This applies particularly to what is in my eyeline as I sit at my desk. The rest of the room is generally neat, but if I can’t see it directly, I worry less about it. More specifically, this means not having clutter on my desk, especially when it comes to wires. My goal is to have as close to zero visible wires as possible from where I sit, and I am happy that I have achieved that as much as I think is possible at this point.

The Desk

The desktop itself is part of an old desk that was given to me by a dear friend when I got my first apartment over 20 years ago. It is oak veneer over particleboard, I think, but it is very solid and very heavy. I converted it years ago to have metal legs which is why you can see the intentionally exposed ends of carriage bolts at the corners. Those are no longer used but I still like the look. About a year ago, I replaced the legs with adjustable legs from Uplift. They are not the cheapest but the quality is excellent and the support from Uplift is great. I had one leg that started making an odd noise and they sent me a replacement no questions asked.

Yes, there are many computing devices on my desk. Here is the rundown.

Computers

  • minas-tirith - Apple MacBook Pro (16-inch, 2023) with an M2 Pro - This is my work computer in a Twelve South BookArc partially behind the right hand monitor.
  • hobbiton - Apple MacBook Pro (16-inch, 2021) with an M1 Pro - This is my personal computer.
  • minas-ithil - Mac Pro (Late 2013) with the 3 GHz 8-Core Intel Xeon E5 - I always wanted to have one of these and I recently came into one. It is used to run a few applications that I want always on. It also looks pretty on my desk.
  • sting - iPhone 14 Pro on a Twelve South Forté. Currently running the iOS 17 beta so I can use Standby Mode to show pictures.
  • narsil - 12.9” iPad Pro (5th generation) with an M1 and Apple Pencil

Computer Accessories

  • 2 LG 4K Monitors (27UK850). They are fine, nothing special about them.
  • Amazon Basics dual VESA mount arm. Nothing special here either, it works. I don’t move the monitors around any, so it does an acceptable job holding them in place.
  • Caldigit TS4 - This is the center of my system. It is a single Thunderbolt cable to my work MBP and all accessories are plugged into it. Not cheap, but rock solid and solves for my “no visible cables” policy. It connects to both monitors, all the USB accessories, and wired GbE to my home network.
  • Caldigit TS3+ - I had this before upgrading to the TS4. I still use it to attach my personal MBP to things.
  • Apple Magic Keyboard with Touch ID and Numeric Keypad - I like the dark look and the Touch ID is great for authenticating especially for 1Password.
  • Apple Magic Trackpad - I have been known to switch back and forth from a trackpad to a mouse, but I am in trackpad land now.
  • Dell Webcam WB7022 - I got this for free at some point and I mostly love it because it reminds me of the old Apple iSight camera.
  • Harman Kardon SoundSticks - These are the OG version from around 2000 that I got with an old Power Mac G4. They still sound great.
  • audio-technica ATR2100x-USB microphone - Got this on the recommendation of the fine folks at Six Colors.
  • Elgato low profile Wave Mic Arm - I had a boom arm before but I hated having it in front of me. This does a great job of keeping the mic low and out of camera when I am using it.
  • Bose QC35 II noise-cancelling headphones - I don’t use them at home all that often, but they are amazing for travel.
  • FiiO A1 Digital Amplifier - This is attached to the TS4 audio output and I use it to play music to a pair of Polk bookshelf speakers out of frame.
  • Cables - I generally go for either Anker or Monoprice cables where I do need them. I like the various braided cables they offer.

Desk Accessories

  • Grovemade Wool Felt Desk Pad - In the winter, my desk gets cold which makes my arms cold. This adds a nice layer that looks nice and absorbs sounds as well.
  • Wood cable holder - This isn’t the exact one I have (I can’t find where I originally ordered mine), but the idea is the same: it keeps charging cables handy but out of sight. Really a game changer.
  • Field Notes notebooks - I like to keep a physical record of what I work on. There are no better small notebooks than these.
  • Studio Neat Mark One pen - I am not a pen geek, but this one is just gorgeous and I love writing with it.
  • Zwilling Sorrento Double Wall glass - This is great for cold beverages in humid climates as it won’t sweat all over your desk.

Lighting

  • 2 Nanoleaf Smart Bulbs - These are mounted behind my monitors facing the angled wall to provide indirect light

Underneath

I am actually pretty proud of the contained chaos that is under the desk. It is kept in place enough that I don’t hit cables with my legs, which is more than I can say for many of my desks in the past.


Last Friday in my 1:1 with my boss, I came in quite frustrated as a result of a code change that I had to roll back that morning. He rightly observed a few things: 1) I have only been at my new job for two months, so I am still new to the code and need to cut myself some slack, but more importantly, 2) he has noticed that I feel I have failed if something does not work well the first time I deploy it. While related to the first point, the second speaks to a good observation about myself and also another subtle lesson I have to learn in my new role.

The less interesting point is that I am a perfectionist and I need to work on being more forgiving with myself. Noted.

The more interesting point, I think, is to expand on his statement: “I expect that my code will work properly when I release it the first time.” I thought about that a lot over the weekend, and I actually think there is an addition to that statement that explains my frustration better: “I expect that my code will work properly when I release it if I expect my code to work properly.” This is a weird, circular statement; let me explain.

When I work on a code change, I have an internal sense of how I want the code to work, how risky I think the change I am making is, and the areas where I think there could be problems. A lot of this is intuition. And I do not propose to release my code until I have reached a certain comfort level that I understand the “shape” of the code and its impact on the system when it is released. The frustration came from my intuition of the risk of what I was releasing being way off base.

This intuition will only come with time so for now all I can do is be kinder to myself until it gets more accurate.

(Apologies for the bad title. For whatever reason I couldn’t get past mixing Great Expectations with The Wrong Trousers.)


The last time I went through a job transition, I was moving from a role in technical management to being an individual contributor again. This involved picking up a new programming language, a new framework, and a new industry. I knew that I would be starting from scratch in a lot of ways, hoping my experience would carry me through.

This new transition is markedly different. I have left a role as a very senior engineer where I had significantly more professional experience than my coworkers, as well as significantly longer tenure at the company than all of them (by nearly 5 years). I exaggerate, but it could be argued that I knew everything about everything in our technical stack and how we had gotten to where we were. My new role uses the same language and the same framework (but in a much more modern way).

Intellectually I knew that this would still be a big change, but I don’t think I appreciated how much of a shock it would be to go back to knowing basically nothing about anything. This has been the hardest adjustment to make. Coming to every problem with only basic knowledge is like knowing a language but never having spoken to a single person for whom it is their native tongue. My new coworkers have been amazing and are spending lots of time helping, but this is a process that can’t be forced.

In my head, I imagine a vast map of all the knowledge I could know, covered up like the map in an RPG. As I experience and learn, I open up different parts of the map, but they are islands. Over time I will start making the connections between the different parts, and that is when I can truly bring my experience to bear and add more value to the team.

It has only been a few weeks, but I feel like some small connections are being made. I can’t wait until I get more. It is an exciting and sometimes exhausting journey.


Just a quick entry that will hopefully help someone else in a similar situation. (Pardon the ton of links and buzzwords; I want to make sure this is easily searchable.) On my network, I use the ISC DHCP server via OPNsense. Most of my hosts get fully dynamic addresses, and I have the DHCP server register their names in DNS using RFC 2136. This works very well in that I don’t have to worry about manual IP allocation, yet I can still use friendly hostnames to access systems. For the few systems that need statically defined IP addresses, I have set up static leases keyed on the system’s MAC address. This has worked very well so far.

But I recently came across a new situation where I hit a snag. As I will now have two laptops that I use regularly, I wanted to be able to attach either of them to my desktop monitors via my CalDigit TS3+, which has a wired ethernet connection. This means that, at times, two different computers will be requesting an IP address from the same MAC address. This isn’t particularly an issue, except that macOS will set the hostname of the system based on a reverse DNS lookup of the IP address it receives. Given caching and the timing of the RFC 2136 updates, I would open a terminal on laptop minas-tirith and see my prompt saying my computer was hobbiton. This bothered me for aesthetic (and possible confusion) reasons.

DHCP has the ability to send a “client ID” when requesting an address, and the ISC DHCPd server can do a static IP assignment based on that client ID. This seemed like the perfect solution: I could set a client ID on each of the laptops for that network interface, each would get the proper IP, a DNS name would be registered for that IP, and all would be happy. My first attempt almost worked. I set the client IDs on both laptops, and the DHCP server did give them different IP addresses, so I was confident that the client ID was being used; however, they were not the static assignments that I had set in my configuration.
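For reference, setting the client ID on macOS can be done in the Network settings for the interface (the DHCP Client ID field), or from the terminal with something along these lines; the service name here is only an example of what a dock’s adapter might be called:

# list the network service names macOS knows about
networksetup -listallnetworkservices

# enable DHCP with a client ID on that service
networksetup -setdhcp "USB 10/100/1000 LAN" hobbiton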

Doing some searching, I found very little about this problem, but I saw a few mentions indicating that DHCP clients often prepend some bytes to what they send as the client ID. I dug a bit into what was being sent by looking at the dhcpd.leases file on my server, and lo and behold, that was what was happening:

lease 192.168.42.208 {
  starts 4 2023/03/02 12:08:46;
  ends 4 2023/03/02 12:10:05;
  tstp 4 2023/03/02 12:10:05;
  cltt 4 2023/03/02 12:08:46;
  binding state free;
  hardware ethernet 64:4b:f0:13:22:f6;
  uid "\000hobbiton";
}

The uid line is the client ID. The client was prepending a null byte at the beginning. So I went back to my DHCP server and set the matching client ID to be \000hobbiton. I renewed the lease and VOILA! I was now getting the IP address I had assigned.
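In OPNsense this just goes in the static mapping’s client identifier field, but in raw ISC dhcpd.conf terms the resulting host entry looks roughly like this (the address is a placeholder):

host hobbiton-dock {
  option dhcp-client-identifier "\000hobbiton";
  fixed-address 192.168.42.50;
}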

Another step in living the dual-laptop life. Now if I can just find a good solution for using Apple Bluetooth keyboards with Touch ID…


As I mentally prepare for my new role and the new computer that will come along with it, it seemed like a good time to do some digital housekeeping. At Food52 I never had a company-owned laptop, so I was able to be a little lazier about keeping work and personal things separate. But a shiny new M2 MacBook Pro showed up a few hours ago, and I want to try to do things a bit more cleanly now. In addition, I wanted to improve some security and identity items.

Overall the setup went pretty well, taking only a few hours. I’m sure I missed some things, but I am ready to get started!

SSH Keys

For the longest time I was a bit inconsistent with SSH keys. I wavered between having them represent me personally as an identity and having them represent me as a user on a particular computer. With the advent of being able to store and use SSH keys via 1Password, I wanted to clean things up. Using 1Password, it made more sense to treat keys as something that represents me personally, without regard to the computer I am on. I went back to having two keys stored in 1Password: a big (4096-bit) RSA one and a newer Ed25519 one. I prefer the newer key, but I have found that some systems can’t handle it, so having both is nice. I cleaned up my access to various SSH-based systems and now have a simple authorized_keys file with just two keys in it everywhere. (GitHub just gets the Ed25519 one, as they support it just fine.)
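The one piece of glue needed is pointing SSH at the 1Password agent, which on macOS is a small ~/.ssh/config stanza (the socket path below is the one 1Password documents at the time of writing):

Host *
  IdentityAgent "~/Library/Group Containers/2BUA8C4S2C.com.1password/t/agent.sock"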

API Keys

I don’t have a lot of API keys right now, but I assume I will have more in the new role. Another new 1Password feature (can you tell I am a fan?) is command-line integration for API keys. I had read about this when I was at Food52 but had not gotten around to setting it up. I did so this morning for the few keys I still have, and it works really well. I’m excited to see how it works when I have a bunch more.
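The pattern is to put secret references in an env file and let op expand them only when the command runs; the vault and item names here are made up:

# .env
STRIPE_API_KEY="op://Private/Stripe/credential"

# run with the real value injected into the environment
op run --env-file=.env -- ./deploy_script.rb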

Software

You may already know about brew for installing UNIX-y software. But did you know about using a Brewfile? You can use one to install all kinds of applications automatically with a single command. This simplified the vast majority of installs on the new laptop.
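A Brewfile is just a checked-in list of formulae and casks that brew bundle reads and installs in one go (the entries below are examples, not my full list):

# Brewfile
brew "git"
brew "ripgrep"
cask "1password"
cask "iterm2"

# install everything it lists
brew bundle install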

File Synchronization

The convenience of tools like Dropbox and iCloud Drive is pretty obvious. But for someone like me who is very concerned with privacy (and likes futzing with tech and occasionally making things more difficult for myself), I don’t like the idea of keeping my sensitive data on someone else’s infrastructure in an unencrypted format. So, a number of years ago, I started using Resilio Sync (at the time it was BTSync). This is a sync product that operates in a similar way to Dropbox, but it is peer-to-peer between any number of computers you control. It also has the ability to set up read-only and (more interestingly) encrypted copies. This means I can have a replicated server that has all of my data but is inaccessible to anyone who breaches that machine. This has allowed me to set up a few remote servers outside of my house that provide disaster recovery but are also safe from a privacy perspective.

As part of my cleanup, I made a new shared folder specifically for work files separate from my personal synced folders.