This is not a manager README

https://deitte.com/not-a-manager-readme/ (Sun, 25 Oct 2020)

I've had a manager's README for a while. I have sent it out to people I manage as well as those I've hired. The initial articles I read on why to write one made sense to me. I like to make sure I'm communicating well with those around me, and this was another good way to do that.

I've read more lately, though, that makes me think differently. While some people feel more strongly than I do that these documents aren't worth much, some of their points resonate a lot with me. What is this getting me that the first hour of conversation with the person doesn't already get? Am I keeping it up to date? What signals does it send to those who work with me?

I do think that the manager README can still be helpful as a way to introspect on how I think. I also think it can be a useful tool when I'm recruiting someone, almost as marketing for how I work. Hopefully positive and truthful marketing!

And so with that, here's not a manager README.

Learning for Team Leads

https://deitte.com/learning-for-team-leads/ (Wed, 21 Oct 2020)

I've enjoyed helping a number of team leads grow into this role. When doing this, I've settled into a pattern of steps I take to help out. I hope this is useful to other managers or those starting on this journey.

Step 1: Write down the role

Figure out what the ideal state is for the team lead role. If you have an engineering-wide document on this, great. If you don't have that, you could be writing the first draft. Think about what is usually expected in this role, getting help from others if needed. Consider whether there is anything unique to the team. Write it all down, whether it seems obvious or not, and try to group related items together. Some ideas, though not meant to be a complete list:

  • Provide guidance on technical issues
  • Coordinate any cross-cutting items with other teams
  • Be available for help and learning
  • Help new members or those struggling
  • Enter tickets, or encourage others to
  • Oversee emergency issues
  • Provide prioritization advice and recommendations, both short and long term

Step 2: Figure out where growth is needed

Once you have the role written down, what has the team lead not done before? Where might they need some help to grow? Pick one or two items to start working on, finding concrete goals for them that can be achieved.

Make sure to talk about these every few months. As the team lead dives in, what is working well and what still needs work will keep changing. Help them see the bigger picture: the many things they are already doing well, and what still needs some growth.

Step 3: Connect with other team leads

Make sure the team lead is spending the time to learn from others and not just from you. Hopefully there is already a team lead meeting and a group chat where they can learn more.

Step 4: Have conversations with articles

To keep talking about being a team lead, and leadership in general, I find a series of articles really helpful for having conversations over the course of a year. The team lead can pick and choose when they have the time, and you can both use time each week to discuss an article once it has been read.

A book can work very well too, I'm sure, as can a different set of articles. It's about having good material that both people have read, along with a conversation about what they learned, have seen in the past, disagree with, and so on.

Here's my list. I don't agree with all of these completely, but I've read them all and enjoyed discussing them.

Step 5: Find more avenues for continuous learning

Learning to be a team lead, like much of life, is a continuous journey. These steps are just a way to help someone get started, but here are two places to consider going for more learning:

  • Software Lead Weekly. There are multiple good articles posted here every month, so many that they could be included in step 4 above.
  • The Rands Leadership Slack. There's an active channel for essentially every topic.

I'm sure there are many fantastic books and articles I'm forgetting above, or that I haven't gotten to learn from myself.

Replacing an old release

https://deitte.com/away-from-an-old-release/ (Sat, 17 Oct 2020)

I've been caught in jobs dealing with the pain of being on old releases and needing to do an upgrade: the uncertainty of an unknown upgrade path, reading through volumes of change logs, the discussions about the unknown unknowns, and the extra push to do upgrades better. Given all of this, I am good about prioritizing upgrades at work... but apparently not for myself!

I was on such an old version of Ghost on this blog that I needed to start from scratch for the upgrade. My ghost-on-aws repository now has instructions that should work for others using the latest Ghost. I didn't include everything I did, like the new-to-old migration path with EBS volumes, but it does include other good things to do, such as HTTPS. And with "ghost update", I'll now be able to be good to myself and stay on the path of incremental upgrades.

New Blog, Old Again

https://deitte.com/new-blog-old-again/ (Sat, 03 Sep 2016)

I was handling some overdue, routine maintenance on this blog (upgrading to Ghost 0.10, resizing some AWS resources based on actual usage), and I realized I should really say something here.

This new blog, started in earnest at the beginning of the year, has become stale again. This is for good reason, as I've started a new job at Maxwell Health. I won't pretend that it will be updated much anytime soon; my free time is going into learning about ANSI 834, event sourcing, and a million other things. Perhaps details for this blog next year!

Scheduled EBS snapshots

https://deitte.com/scheduled-ebs-sna/ (Sun, 31 Jan 2016)

As part of my Ghost setup on AWS, I wanted to make sure I had backups of EBS in case things go wrong (as they always do, eventually). I couldn't find many guides for this, surprisingly. I guess most people use the AWS UI for this, or haven't published their solutions, or trust EBS and what they change on it more than they should.

I did find something pretty close to what I wanted, a blog post on How to Schedule Daily Rolling EBS Snapshots. I expanded on it and made a GitHub repo for everyone to use. Thanks to the original author for the help, and I hope others are helped by my additions.
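
For a sense of what the repo automates, here's a minimal sketch of the core idea (not the repo's actual script; the region, volume ID, and retention period are placeholder assumptions): take a snapshot of a volume, then prune that volume's snapshots older than a retention window, using the aws-sdk Node.js package.

  var aws = require('aws-sdk');
  var ec2 = new aws.EC2({ region: 'us-east-1' });

  var volumeId = 'vol-xxxxxxxx'; // hypothetical volume ID
  var retentionDays = 7;

  // Take today's snapshot of the volume.
  ec2.createSnapshot({
    VolumeId: volumeId,
    Description: 'daily backup of ' + volumeId
  }, function (err, snapshot) {
    if (err) throw err;
    console.log('started snapshot', snapshot.SnapshotId);
  });

  // Prune this volume's snapshots that are older than the retention window.
  var cutoff = Date.now() - retentionDays * 24 * 60 * 60 * 1000;
  ec2.describeSnapshots({
    OwnerIds: ['self'],
    Filters: [{ Name: 'volume-id', Values: [volumeId] }]
  }, function (err, data) {
    if (err) throw err;
    data.Snapshots.forEach(function (snap) {
      if (snap.StartTime.getTime() < cutoff) {
        ec2.deleteSnapshot({ SnapshotId: snap.SnapshotId }, function (delErr) {
          if (delErr) console.error('delete failed for', snap.SnapshotId, delErr);
        });
      }
    });
  });

Run something like this daily from cron and you have rolling backups.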

Highly-available S3

https://deitte.com/highly-available-s3/ (Sun, 24 Jan 2016)

It's possible to raise the availability of S3 through a multi-region setup. Below I'll explain some of the work we've done at Brightcove on this, including a new Node.js library.

Nearly-highly-available S3

S3 is a fantastic service, and it has the reputation of being a very reliable object store. Not only is the service well-regarded, Amazon touts a 99.999999999% durability of objects over a given year.

So why is anything more needed here for reliability?

While the stated durability is incredibly high, the claims on availability are fairly standard. It is available somewhere between 99.99% a year (from a details page) and 99.9% a month (from the SLA). At the lower end, that's about 44 minutes of allowed downtime in a month (0.1% of a 31-day month's 44,640 minutes).

On the project I'm working on, the player services for the Brightcove player, we have a lot of API calls to make. Each API that has 99.9% (or even 99.99%) availability makes it much harder for us to meet our own SLA requirements. Raising the availability of S3 ourselves makes life simpler.

And whatever S3's reputation, all services can fail. This was shown with S3 in the August outage in us-east. We had downtime during this issue, as did many others.

Try, try again

The main way we're making sure that S3 is available is by using a second region with replicated S3 data. But there's a simpler step to understand first. If there are intermittent issues from one location, the retry mechanism built into the AWS SDK will help. By default, each S3 call is retried up to 3 times using exponential backoff. If you want to try even more times, it's simple enough to do so:

  awsConfig.maxRetries = maxAttempts;

The retries happen on 5xx responses. We've also seen some 400 responses, from socket timeouts, that succeed on a retry. If you want to retry in those cases too, or simply capture information about retries, you can use the retry event:

  awsRequest.on('retry', function (response) {
    // add an "if" statement here for the
    // conditions you want to retry on
    response.error.retryable = true;
  });

Multi-region S3

The best way to increase the availability of S3, as I mentioned above, is to use a second region.

The heavy lifting for using this feature is all done for you by Amazon with cross-region replication. By enabling cross-region replication in both directions between two buckets, you can always write to one of the buckets and have the other one as the backup.

Before going forward with this plan, make sure to spend some time in the AWS cost calculator. You will have twice the storage cost, twice the API calls, and an increase in inter-region data transfer. Weigh this against the extra work to be done below and, most importantly, the extra ongoing complexity of adding this in. You may find it's not worth it for your case, or that there are lower-hanging availability concerns to tackle first.

Setting up cross-region replication

See Amazon's guide for setup of replication. A few additional tips on setup:

  • Don't forget to turn on versioning for both buckets
  • Once you have gone through the replication steps in Amazon's guide, remember to go back and set up replication from the second bucket as well
  • If you are starting with one bucket that already has data, make sure to use the AWS SDK for an initial copy of files from one bucket to the other, since replication only applies to newly written objects (a sketch of this follows the list).
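
Here's a minimal sketch of that initial copy, assuming the aws-sdk Node.js package and placeholder bucket names (this is not code from Amazon's guide):

  var aws = require('aws-sdk');
  var s3 = new aws.S3();

  // Copy every existing object from the primary bucket to the failover
  // bucket, paging through the listing 1000 keys at a time.
  function copyAll(marker) {
    s3.listObjects({ Bucket: 'primary-bucket', Marker: marker }, function (err, data) {
      if (err) throw err;
      data.Contents.forEach(function (obj) {
        s3.copyObject({
          Bucket: 'failover-bucket',
          // keys with special characters will need URL-encoding here
          CopySource: 'primary-bucket/' + obj.Key,
          Key: obj.Key
        }, function (copyErr) {
          if (copyErr) console.error('copy failed for', obj.Key, copyErr);
        });
      });
      if (data.IsTruncated) {
        copyAll(data.Contents[data.Contents.length - 1].Key);
      }
    });
  }

  copyAll();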

Using a failover bucket

With a second bucket set up and cross-region replication enabled, it's time to start using your new setup. There are two main ways you can use your new failover capabilities:

  1. Switching to the failover bucket for public users of S3 data
  2. Switching to the failover bucket for API calls

For the case of public viewing, you will need something that is contacted before S3 for this to work properly. Hopefully you are already using a CDN in front of S3, in which case this work should be fairly simple. Many CDNs have a way to switch over to a failover origin as needed. In the case of CloudFront, I believe this needs to be done using Route 53 instead, but you'll need to read up on this yourself.

For the case of API usage, you will want a way to automatically switch API calls to the failover bucket as needed. We created an open source Node.js library just for this case, s3-s3. Even if you aren't using Node.js, looking at the library should give you some ideas of how this could be done.
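
To make the pattern concrete, here's a minimal sketch of the idea (this is not the s3-s3 API; the client setup and bucket names are hypothetical): try the primary bucket, and fall back to the replicated bucket when the call fails.

  var aws = require('aws-sdk');
  var primaryS3 = new aws.S3({ region: 'us-east-1' });
  var failoverS3 = new aws.S3({ region: 'us-west-2' });

  // Read from the primary bucket; on any error, retry against the failover
  // bucket. A real implementation would check which errors are worth
  // failing over on, and emit a metric when a failover happens.
  function getObjectWithFailover(key, callback) {
    primaryS3.getObject({ Bucket: 'primary-bucket', Key: key }, function (err, data) {
      if (!err) {
        return callback(null, data);
      }
      failoverS3.getObject({ Bucket: 'failover-bucket', Key: key }, callback);
    });
  }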

Testing failure

So you've set up your failover bucket and ensured its usage within your application. Now what? Sure, you could deploy and call it done, but it's nice to know that things really work before a disaster shows they don't.

The AWS SDK does provide a way to route S3 calls differently so that you can put them through a proxy that will simulate failure. You can do this by setting the endpoint property in the AWS config:

    var endpoint = new aws.Endpoint(s3PrimaryProxy);
    awsConfigPrimary.endpoint = endpoint;
    awsConfigPrimary.s3BucketEndpoint = true;

Set s3PrimaryProxy to the full URL of the proxy of your choice. I've seen good work done with toxy.

Monitoring failure

If you've written and tested things well, you'll never even know things have gone badly. Well, unless you read any tech news anywhere, as they will all report on an S3 apocalypse. But it's nice to find out yourself.

The s3-s3 library has a failure event that you can use to send metrics or alerts when a failure happens. I have seen some of these go off already, interestingly enough. The best guess is that these are just network blips, and it's a fairly tiny fraction of calls.

Another way to find out about failures is to look at the S3 bucket logs. The logs can take a while to fill in, so it won't be an instant notification, but it will tell you eventually, likely within an hour. For reading the logs, I've liked how Sumo Logic shows things, although there are many other tools. You may need multiple tools here to get exactly what you need.

Moving on

And finally, one less failure to worry about!

Old posts, new again

https://deitte.com/old-posts-new-again/ (Mon, 18 Jan 2016)

None of the posts below, from the old version of this site, are likely to be too interesting to you. They are mostly about ancient history or dead technologies. But I loved looking through them and remembering what I wrote, so I share them anyways, like I would pictures of my kids.

The one helpful thing I can share here is from my old "About" page: Are you wondering how you pronounce Deitte? Say "be it", but with a "d" instead of a "b".

Ghost, nginx, and Node 4.x on AWS

https://deitte.com/ghost-on-aws/ (Wed, 21 Oct 2015)

When I decided to switch my blog to use Ghost on AWS, I assumed I would find a complete guide for the setup. One may exist, but I didn't find it. There is a plethora of information scattered around (including wonderful docs at http://ghost.org), but I didn't find exactly what I was looking for. So I wrote up what I did for others to use. You can see it on GitHub.

Welcome! Again!

https://deitte.com/welcome-again/ (Wed, 21 Oct 2015)

This is the 2015 reboot of the 2005 deitte.com blog.

I don't expect to be a regular writer here, but I wanted to have a more modern blog for any occasional writing I do.

I also didn't want to discard all of my old writings or contribute to link rot, so I've managed to keep everything around in the archives.

Those posts are like finding some photos in an old box for me. It's fun to look at a few favorites: the welcome post, way too many posts on IFrames, Aftermixing, VAST, and so many others.

So welcome, again!
