On Open Source and Volunteering

I saw a recent post on LinkedIn from Alex Henthorn-Iwane that gave me pause. He was talking about how nearly two-thirds of GitHub projects are maintained by one or two people. He also quoted some statistics showing how many projects are maintained by volunteers and unpaid contributors rather than supported institutionally by people getting paid to do the work. It made me reflect on my own volunteering journey and how open source and other volunteer organizations aren’t so different after all.

An Hour A Week

Most of my readers know that one of my passion projects outside of Tech Field Day and this humble blog is the involvement of my children in Scouting. I spend a lot of my free time volunteering as a leader and organizer for various groups. I get to touch grass quite often. At least I do when I’m not stuck in meetings or approving paperwork.

One of the things that struck me in Alex’s post was how he talked about the lack of incoming talent to help with projects as older maintainers age out. We face a similar problem in Scouting. Rather than our volunteers getting too old to do the work, we face the issue of the kids aging out. When the kids leave the program, whether by hitting age limits or by growing bored with it, their parents usually go with them. Since those parents are the source of our volunteers, we quickly have gaps where our most promising leaders are gone after only a couple of years. Only the most dedicated volunteers stick around after their kids have moved on.

Recruiting people to be a part of the fun, whether a project or an organization, is hard. People have even less time now than they did a few years ago. It could be social media or binge-watching TV or doing the work of an extra person or two, but finding help is almost impossible. One of the ways that we’ve tried to bridge that gap is to make sure that people who want to help aren’t overwhelmed. We give them little jobs to do to help get them into the flow of things before asking them to do more. That would translate well to open source projects. Give people small tasks or little modules to work on instead of throwing them into the deep end of the pool with no warning. That’s a quick way to alienate your volunteers. It also keeps them from burning out quickly.

We ease them in by saying “it’s only an hour a week”. Realistically it’s more like two or three hours per week to start. However, if you try to burden people with too much all at once they will run away and never look back. Even if the developers are overwhelmed and need the help, they have to understand that shifting the load to other volunteers isn’t a sudden thing. It takes time to slowly move over tasks and evaluate how people are doing before letting them shoulder more of the load.

My Way or the Highway

The other volunteer issue that I run into is the people who are entrenched in what they do. This applies especially to the die-hard maintainers of a project. They have their way of doing things and that’s how it’s going to be. Just take a stroll through any Linux kernel mailing list thread and see how those tried-and-true ways are encouraged, or in some cases enforced.

I’m all for having structure and a measured approach to how things are done. Where it causes problems is when that structure takes precedence over common sense. In my volunteer work I’ve seen a number of old-timers who tell me that “this is the way it’s done” or “my way works” when it clearly doesn’t, or when it leads to other problems. Worse yet, when challenged those people tend to clam up and decide that anyone who disagrees with them should just leave or get with the program. It leads to hard feelings and zero desire to help out in the future. The well is poisoned not only for that person but for anyone who hears the story of how they were rejected or marginalized.

People who are shouldering the load want help. Even if they’re so set in their ways that they can’t conceive of a different way to do it, we still need to offer our help. What we need to realize on our side is that their way has worked for them all this time. We don’t need to come crashing through the front door and try to upset everything they’ve worked hard to accomplish. Instead, we need to ask questions that help us understand the process and make suggestions where appropriate instead of demands that must be met. My Way or the Highway doesn’t work in either direction. Compromise is the key to accomplishing our mutual goals.


Tom’s Take

Writing an open source library isn’t like taking a group camping in the woods. However, the process isn’t totally foreign. A group of dedicated people are doing something that is thankless but could end up changing lives. We’re always overworked and we want people to help. We just need them to understand why we do things the way we do them. And if that means pushing back it’s up to us to make sure we don’t scare anyone off that is genuinely interested in helping out. All volunteer work lives and dies based on who is helping us accomplish the end goal. Don’t get hung up on the details when evaluating those that choose to give of their time for you.

Copilot Not Autopilot

I’ve noticed a trend recently with a lot of AI-related features being added to software. They’re being branded as “copilot” solutions. Yes, Microsoft Copilot was the first to use the name and the rest are just trying to jump in on the brand recognition, much like using “GPT” last year. The word “copilot” is so generic that it’s unlikely to be trademarked without adding more, like the company name or some other unique term. That made me wonder if the goal of using that term was simply to cash in on brand recognition or if there was more to it.

No Hands

Did you know that an airplane can land entirely unassisted? It’s true. It’s a feature commonly called Auto Land and it does exactly what it says. It uses the airport’s Instrument Landing System (ILS) to land automatically. Pilots rarely use it because of a variety of factors, including the need for minute, last-minute adjustments during a very stressful part of the flight as well as the equipment requirements, such as a fairly modern ILS system. That doesn’t even mention that using Auto Land snarls airport traffic because other planes have to be held outside ILS range to ensure only one plane can use it at a time.

The whole thing reminds me of how autopilot is used on most flights. Pilots usually take the controls during takeoff and landing, which are the two most critical phases of flight. For the rest, autopilot is used a lot of the time. Those are the boring sections where you’re just flying a straight line between waypoints on your flight plan. That’s something that automated controls excel at doing. Pilots can monitor but don’t need to have their full attention on the readings every second of the flight.

Pilots will tell you that taking the controls for the approach and landing is just smart for many reasons, chief among them that it’s something they’re trained to do. More importantly, it places the overall control of the landing in the hands of someone that can think creatively and isn’t just relying on a script and some instrument readings to land. Yes, that is what ILS was designed to do but someone should always be there to ensure that what’s been sent is what should be followed.

Pilot to Copilot

As you can guess, the parallels to using AI in your organization are easy to see. AI may have great suggestions and may even come up with novel ways of making you more productive, but it’s not the only solution to your problems. I think the copilot metaphor is perfectly illustrated by the rush to have GPT chatbots write reports and articles last year.

People don’t like writing. At least, that’s the feeling that I got when I saw how many people were feeding prompts to OpenAI and having it do the heavy lifting. Not every output was good. Some of it was pretty terrible. Some of it was riddled with errors. And even the things that looked great still had that aura of something like the uncanny valley of writing. Almost right but somehow wrong.

Part of the reason for that was the way that people just assumed the AI output was better than anything they could have come up with and did no further editing to the copy. I barely trust my own skills to publish something with minimal editing. Why would I trust a know-it-all computer algorithm? Especially with something that has technical content? Blindly accepting an LLM’s attempt at content creation is just as crazy as assuming there’s no need to double-check a math calculation even when the result is outside of your expectations.

Copilot works for this analogy because copilots are there to help and to be a check against error. The old adage of “trust but verify” is absolutely the way they operate. No pilot would assume they were infallible and no copilot would assume everything the pilot said was right. Human intervention is still necessary to make sure that the output matches the desired result. The biggest difference today is that when AI art generation or content creation fails to produce the desired result, the cost is wasted time. When an autopilot makes a mistake landing an airliner, the results are far more horrific.

People want to embrace AI to take away the drudgery of their jobs. It’s remarkably similar to how automation was going to take away our jobs before we realized it was really going to take away the boring, repetitive parts of what we do. Branding AI as “autopilot” will have negative consequences for adoption because people don’t like the idea of a computer or an algorithm doing everything for them. However, copilots are helpful and can take care of boring or menial tasks leaving you free to concentrate on the critical parts of your job. It’s not going to replace us as much as help us.


Tom’s Take

Terminology matters. Autopilot is cold and restrictive. Having a copilot sounds like an adventure. Companies are wise not to encourage the assumption that AI is going to take over jobs and eliminate workers. The key is that people should see the solution as offering a way to offload tasks and ask for help when needed. It’s a better outcome for the people doing the job as well as the algorithms that are learning along the way.

User Discomfort As A Security Function

If you grew up in the 80s watching movies like me, you’ll remember WarGames. I could spend hours lauding this movie, but for the purpose of this post I want to call out the sequence at the beginning when the two airmen are trying to operate the nuclear missile launch computer. It requires the use of two keys, one in the possession of each airman. They must be inserted into two different locks located more than ten feet from each other. The reason is that launching the missile requires two people to agree to do something at the same time. The two-key scene appears in a number of movies as a way to show that so much power needs to have controls.

However, one thing I wanted to talk about in this post is the notion that those controls need to be visible to be effective. The two-key solution is pretty visible. You carry a key with you but you can also see the locks that are situated apart from each other. There is a bit of challenge in getting the keys into the locks and turning them simultaneously. That not only shows that the process has controls but also ensures the people doing the turning understand what they’re about to do.

Consider a facility that is so secure that you must leave your devices in a locker or secured container before entering. I’ve been in a couple before and it’s a weird feeling to be disconnected from the world for a while. Could the facility do something to ensure that the device didn’t work inside? Sure they could. Technology has progressed to the point where we can do just about anything. But leaving the device behind is as much about informing the user that they aren’t supposed to be sharing things as it is about controlling the device. Controlling a device is easy. Controlling a person isn’t. Sometimes you have to be visible.

Discomfort Design

Security solutions that force the user out of a place of comfort are important. Whether it’s a SCIF for sharing sensitive data or a requirement to log in with a more secure method, the purpose is to get the user’s attention. You need the user to know they’re doing something important and understand the risks. If the user doesn’t know they’re doing something that could cause problems or expose something crucial, you will end up doing damage control at some point.

Think of something as simple as sitting in the exit row on an airplane. In my case, it’s for Southwest Airlines. There’s more leg room but there’s also a responsibility to open the door and assist in evacuation if needed. That’s why the flight attendants need to hear you acknowledge that warning with a verbal “yes” before you’re allowed to sit in those seats. You have admitted you understand the risks and responsibilities of sitting there and you’re ready to do the job if needed.

Security has tried to become unobtrusive in recent years to reduce user friction. I’m all about features like using SSL/TLS by default on websites or easing restrictions on account sharing or even using passkeys in place of passwords. But there also comes a point when hiding the security reduces its effectiveness. What about phishing emails that put lock emojis next to URLs to make them seem secure even when they aren’t? How about cleverly crafted login screens for services that are almost indistinguishable from the real thing unless you bother to check the URL? It could even be the tried-and-true cloned account on Facebook or Instagram asking a friend for help unlocking their account, only to steal your login info and start scamming everyone on your friends list.

The solution is to make sure users know they’re secure. Make it uncomfortable for them so they are acutely aware of the heightened security. We deal with it all the time in other areas of our lives outside of IT. Airport screenings are a great example. So are heightened security measures at federal buildings. You know you’re going somewhere that has placed an emphasis on security.

Why do we try to hide it in IT? Is it because IT causes stress due to it being advanced technology? Are we worried that users are going to drop our service if it is too cumbersome to use the security controls? Or do we think that the investment in making that security front and center isn’t worth the risk of debugging it when it goes wrong? I would argue that these are solved problems in other areas of the world and we have just accepted them over time. IT shouldn’t be any different.

Note that discomfort shouldn’t lead to a complete lack of usability. It’s very easy to engineer a system that needs you to reconfirm your credentials every 10 minutes to ensure that no one has hacked you. And you’d quit using it because you don’t want to type in a password that often. You have to strike the right balance between user friendly and user friction. You want them to notice they’re doing something that needs their attention to security but not so much that they’re unable to do their job or use the service. That’s where the attention should be placed, not in cleverly hiding a biometric scanning solution or certificate-based service for the sake of saying it’s secure.


Tom’s Take

I’ll admit that I tend to take things for granted. I had to deal with a cloned Facebook profile this past weekend and I worried that someone might try to log in and do something with my account. Then I remembered that I have two-factor authentication turned on and my devices are trusted so no one can impersonate me. But that made me wonder if the “trust this device” setting was a bit too easy to trust. I think making sure that your users know they’re protected is more critical. Even if it means they have to do something more performative from time to time. They may gripe about changing a password every 30 days or having to pull out a security token but I promise you that discomfort will go away when it saves them from a very bad security day.

Human Generated Questions About AI Assistants

I’ve taken a number of briefings in the last few months that all mention how companies are starting to get into AI by building an AI virtual assistant. In theory this is the easiest entry point into the technology. Your network already has a ton of information about usage patterns and trouble spots. Network operations and engineering teams have learned over the years to read that information and provide analysis and feedback.

If marketing is to be believed, no one in the modern world has time to learn how to read all that data. Instead, AI provides a natural language way to ask simple questions and have the system provide the data back to you with proper context. It will highlight areas of concern and help you grasp what’s going on. Only you don’t need to get a CCNA to get there. Or, more likely, it’s more useful for someone on the executive team to ask questions and get answers without the need to talk to the network team.

I have some questions that I always like to ask when companies start telling me about their new AI assistant. They help me understand how it’s being built.

Question 1: Laying Out LLMs

My first question is always:

Which LLM are you using to power your system?

The reason is that there are only two real options. You’re either paying someone else to do it as a service, like OpenAI, or you’re pulling down your own large language model (LLM) and building your own system. Both have advantages and disadvantages.

The advantage of a service-based offering is that you don’t need to program anything. You just feed the data to the LLM and it takes off. No tuning needed. It’s fast and universally available.

The downside of a service-based model is that it costs money. And if you’re using it commercially it’s going to cost more than a simple monthly fee. The more you use it, the more expensive it gets. If your vendor is pulling thousands of daily requests from the LLM, is that factored into the fee they’re charging you? What happens when OpenAI’s prices go up?

The advantages of building your own system are that you have complete control over the way the data is being processed. You tune the LLM and you own the way it’s being used. No need to pay more to someone else to do all the work for you. You can also decide how and when features are implemented. If you’re updating the LLM on your schedule you can include new features when they’re ready and not when OpenAI pushes them live and makes them available for everyone.

The disadvantages of building your own system involve maintenance. You have to update and patch it. You have to figure out what features to develop. You have to put in the work. And if the model you use goes out of support or is no longer being maintained, you have to swap to something new and hope that all your functions are going to work with the new one.
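
To make the tradeoff concrete, here is a rough Python sketch of the two integration paths. It's illustrative only: the model names are placeholders, and the self-hosted option assumes you have the hardware to run an open-weight model through Hugging Face's transformers library.

```python
# Option 1: pay a provider per request (hosted API, no infrastructure to maintain)
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
hosted_answer = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Which switch ports saw errors today?"}],
)
print(hosted_answer.choices[0].message.content)

# Option 2: run an open-weight model yourself (you own tuning, updates, and patching)
from transformers import pipeline

local_llm = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")
local_answer = local_llm("Which switch ports saw errors today?", max_new_tokens=128)
print(local_answer[0]["generated_text"])
```

Either path can work. The difference is whether the recurring cost shows up as a per-token bill from the provider or as the engineering time to keep your own model current.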

Question 2: Data Sources

My second question:

Where does the LLM data come from?

This may seem simple at first, right? You’re training your LLM on your data so it gives you answers based on your environment. You’d want that to be the case so it’s more likely to tell you things about your network. But that insight doesn’t come out of thin air. If you want to feed your data to the LLM to get answers, you’re going to have to wait while it studies the network and comes up with conclusions.

I often ask companies if they’re populating the system with anonymized data from other companies to provide baselines. I’ve seen this before from companies like Nyansa, which was bought by VMware, and Raza Networks, which is now part of HPE Aruba. Both of those companies, which came out long before the current AI craze, collected data from customers and used it to build baselines for everyone. If you wanted to see how you compared to other higher education or medical verticals, the system could tell you what those types of environments looked like, with the names obscured of course.

Pre-populating the LLM with information from other companies is great if your stakeholders want to know how they fare against other companies. But it also runs the risk of populating data that shouldn’t be in the system. That could create situations where you’re acting on bad information or chasing phantoms in the organization. Worse yet, your own data could be used in ways you didn’t intend to feed other organizations. Even with the names obscured someone might be able to engineer a way to obtain knowledge about your environment you don’t want everyone to have.
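
One common middle ground is to pseudonymize identifying fields before any telemetry leaves your environment. The sketch below, with a made-up record and a hypothetical helper, shows the idea. Note that even salted tokens can sometimes be correlated back to an organization, which is exactly the risk described above.

```python
import hashlib
import hmac

SECRET_SALT = b"per-tenant-secret-key"  # hypothetical key, never shared with the vendor

def pseudonymize(value: str) -> str:
    """Replace an identifying string with a stable, salted token."""
    return hmac.new(SECRET_SALT, value.encode(), hashlib.sha256).hexdigest()[:12]

record = {"site": "OKC-Branch-3", "device": "core-sw-01", "p95_latency_ms": 41}
shared_baseline_record = {
    **record,
    "site": pseudonymize(record["site"]),
    "device": pseudonymize(record["device"]),
}
print(shared_baseline_record)  # metrics intact, names obscured
```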

Question 3: Are You Seeing That?

My third question:

How do you handle hallucinations?

Hallucination is the term for when the AI comes up with an answer that is false. That’s right, the super intelligent system just made up an answer instead of saying “I don’t know”. Which is great if you’re trying to convince someone you’re smart or useful. But if the entire reason I’m using your service is to get accurate answers about my problems, I’d rather have you say you don’t have an answer or need to do more research instead of giving me bad data that I use to make bad decisions.

If a company tells me they don’t really see hallucinations then I immediately get concerned, especially if they’re leveraging OpenAI for their LLM. I’ve talked before about how ChatGPT has a really bad habit of making up answers so it always looks like it knows everything. That’s great if you’re trying to get the system to write a term paper for you. It’s really bad if you try to reroute traffic in your network around a non-existent problem. I know there are many techniques out there that can help reduce hallucinations, such as retrieval-augmented generation (RAG), but I need that to be addressed up front instead of with a simple “we don’t see hallucinations”, because that makes me feel like something is being hidden or glossed over.
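
For what it's worth, the core of RAG is simple enough to sketch in a few lines of Python. This is a toy outline, not any vendor's implementation: embed() and llm() are hypothetical stand-ins for a real embedding model and a real LLM client, and production systems use a vector database rather than a list.

```python
import numpy as np

def cosine(a, b):
    """Similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def answer_with_rag(question, documents, embed, llm, top_k=3):
    """Ground the model in retrieved facts instead of letting it improvise."""
    q_vec = embed(question)
    ranked = sorted(documents, key=lambda doc: cosine(embed(doc), q_vec), reverse=True)
    context = "\n".join(ranked[:top_k])
    prompt = (
        "Answer using ONLY the context below. If the answer is not in the context, "
        "say you don't know.\n\nContext:\n" + context + "\n\nQuestion: " + question
    )
    return llm(prompt)
```

The important part isn't the retrieval math. It's the instruction to admit ignorance when the retrieved context doesn't contain the answer, which is the behavior I want a vendor to describe instead of waving the problem away.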


Tom’s Take

These aren’t the only questions you should be asking about AI and LLMs in your network but they’re not a bad start. They encompass the first big issues that people are likely to run into when evaluating an AI system. How do you do your analysis? What is happening with my data? What happens when the system doesn’t know what to do? Sure, there are always going to be questions about cost and lock-in but I’d rather know the technology is sound before I ever try to deploy the system. You can always negotiate cost. You can’t negotiate with a flawed AI.

Repetition Without Repetition

I just finished spending a wonderful week at Cisco Live EMEA and getting to catch up with some of the best people in the industry. I got to chat with trainers like Orhan Ergun and David Bombal and see how they’re continuing to embrace the need for people in the networking community to gain knowledge and training. It also made me think about a concept I recently heard about that turns out to be a perfect analogy to my training philosophy even though it’s almost 70 years old.

Practice Makes Perfect

Repetition without repetition. The idea seems like a tautology at first. How can I repeat something without repeating it? I’m sure that the people in 1967 who picked up the book by Soviet neurophysiologist Nikolai Aleksandrovitsch Bernstein were just as confused. Why should you do things over and over again if not to get good at performing the task or learning the skill?

The key in Bernstein’s research lay in how the practice happens. In this particular case he looked at blacksmiths to see how they used hammers to strike the pieces they were working on. The most accurate of his test subjects didn’t just perform the same movements over and over again. Instead, they had some variability in their technique that allowed them to be more accurate or efficient over time. They weren’t just going through the motions, as it were. They were adapting their motions to the need at the moment. This allowed them to adjust their aim if the piece had moved or use a lighter touch in an area that had thinned too quickly.

Bernstein said this about the way that the blacksmiths practiced their art:

“The process of practice towards the achievement of new motor habits essentially consists in the gradual success of a search for optimal motor solutions to the appropriate problems. Because of this, practice, when properly undertaken, does not consist in repeating the means of solution of a motor problem time after time, but in the process of solving this problem again and again by techniques which we changed and perfected from repetition to repetition. It is already apparent here that, in many cases, ‘practice is a particular type of repetition without repetition’…”

The quote above illustrates a big shift in thinking for people who play sports or perform some kind of task. Instead of merely repeating the movements over and over again until perfection (the ‘means of the solution’) you should instead focus on solving the problem over and over again and adapting your skill to that end. It sounds silly and somewhat pedantic, but the key is in the shift of thinking. For basketball players, it’s not about perfecting your spin move to get around a defender. It’s about understanding the need to get around the defender and how best to accomplish that for different kinds of people defending you.

Avoiding Autopilot

Most of the content you’ll see around the concept of repetition without repetition is for sports players practicing skills. However, I think the concept extends perfectly to the IT certification space and troubleshooting skillset as well. There are a number of important things that we need to learn in order to do our jobs or earn a specialization but we need to remember that the goal is to solve problems and show mastery, not to memorize commands and perform them like a simple batch file.

Here’s a perfect example that I’m very guilty of doing. When you log into a Cisco router to do something, what do you normally do first when you get to the CLI prompt? You almost always need to be in privileged EXEC mode, right? That’s the enable command. When we want to configure something on the router we usually have to be in global configuration mode, which is the configure terminal command. So far, so good, right? Most of you have already picked up on the fact that you can shorten those commands to save time typing out the whole name, which is an important skill to have when you’re configuring a series of devices or trying to do it in a short timeframe. So enable, configure terminal instead becomes en, conf t. It’s like muscle memory at this point.

How many times have you logged into a router to check the routing table and accidentally typed in en, conf t from muscle memory, only to remember that the routing table has to be displayed from EXEC mode, not configuration mode? You chide yourself for typing in conf t and back out to look at the table. But what you’ve really done is shown the power and drawbacks of repetition. If you spend hours upon hours typing in the same commands over and over again you will type them in the same way every time. So much so, in fact, that you forget that you’re doing it until you realize you put something in that you shouldn’t. You knew when you logged in that you wanted to display the routing table. You knew that was available in EXEC mode. And yet your brain and fingers automatically typed the same commands you always type when you log into the router.

The idea of repetition without repetition says that we need to consider the how of solving a problem and the skills needed above and beyond the simple skills themselves. Sure, there may only be one or two commands that achieve a desired output or effect, but we should know both how they impact the performance of the device and how they can impact the outcome of a situation. This is especially important for exams that like to restrict your ability to use specific commands or are written to direct you in a specific line of thinking. Anyone who has ever taken the CCIE lab exam knows how this works. They restrict you from using common commands or give you a question with two possible answers only to limit that to one with a later requirement. The test asks you to configure something in an earlier section and then gives you a task that can undo that configuration if you’re not aware of how it interacts with everything else. If you’ve ever created a routing redistribution loop by accident you know what that feels like.

The Indictment of AI

In a way, repetition without repetition is the key to what makes a person an apt problem solver. By approaching problems with a mindset and not just a skillset you open your world to new possibilities and considerations. You know there is more than one way to skin a cat, as the old saying goes. You’re smarter than an artificial intelligence, which only works within a set of bounds, with skills and apparent intelligence that repeat what it’s told or apply a very narrow focus every time to provide consistent results.

Computer programs and algorithms are dumb because they will solve the problem the same way each time they are executed. People will solve the problem and then start analyzing the results to find new, better, and faster ways to implement solutions. That’s the heart of learning. It’s not just performing the subtasks of the skill to perfection every time. It’s about learning how to implement them in a better way each time and arriving at better solutions when the variables are changed. The human mind has been shaped over centuries and millennia to look for patterns, and it can adapt those patterns to new concepts and “grow up” by learning over time to adjust to new inputs or fresh data. That, more than anything, is why repetition without repetition makes us better than the AI we’re programming to eclipse us.


Tom’s Take

When I first heard of this concept I thought it was some new idea from sports science borne of modern research techniques. I was shocked to learn it was discovered before I was born and has roots in some of the oldest trades we can think of. What it proves is that the human mind and body are wonderful things that react perfectly when challenged in the right way. The brain will adapt and overcome when presented with new inputs. The way we grow and improve ourselves is not rote memorization or continuous skill repetition. Instead, if we internalize the importance of the outcome over the means of getting there we will find ourselves smarter and more flexible when new challenges come our way.

A Handy Acronym for Troubleshooting

While I may be getting further from my days of being an active IT troubleshooter it doesn’t mean that I can’t keep refining my technique. As I spend time looking back on my formative years of doing troubleshooting either from a desktop perspective or from a larger enterprise role I find that there were always a few things that were critical to understand about the issues I was facing.

Sadly, getting that information out of people in the middle of a crisis wasn’t always super easy. I often ran into people that were very hard to communicate with during an outage or a big problem. Sometimes they were complicit because they made the mistake that caused it. They also bristled at the idea of someone else coming to fix something they couldn’t or wouldn’t. Just as often I ran into people that loved to give me lots of information that wasn’t relevant to the issue. Whether they were nervous talkers or just had a bad grasp on the situation it resulted in me having to sift through all that data to tease out the information I needed.

The Method

Today, as I look back on my career, I would like to posit a method for collecting the information that you need in order to effectively troubleshoot an issue.

  • Scope: How big is this problem? Is it just a single system or is it an entire building? Is it every site? If you don’t know how widespread the problem is you can’t really begin to figure out how to fix it. You need to properly understand the scope. That also includes understanding the scope of the system for the business. Taking down a reservation system for an airline is a bigger deal than guest Wi-Fi being down at a restaurant.
  • Timeline: When did this start happening? What occurred right before? Were there any issues that you think might have contributed here? It’s important to make the people you’re working with understand that a proper timeline is critical because it allows you to eliminate issues. You don’t want to spend hours trying to find the root cause in one system only to learn it wasn’t even powered on at the time and the real cause is in a switch that was just plugged in.
  • Frequency: Is this the first time this has happened? Does it happen randomly or seemingly on a schedule? This one helps you figure out if it’s systemic and regular or just cosmic rays. It also forces your team or customers to think about when it’s occurring and how far back the issue goes. If you come in thinking it’s a one-off that happened yesterday only to find out it’s actually been happening for weeks or months you’ll take a much different approach.
  • Urgency: Is this an emergency? Are we talking about a hospital ER being down or a typo in a documentation page? Do I need to roll out to spend the whole night fixing this or is it something that I can look at on a scheduled visit? Be sure to note the reasoning behind why they chose to make it a priority too. Some customers love to make everything a dire emergency just to ensure they get someone out right away. At least until it’s time to pay the emergency call rate.

A four step plan that’s easy to remember. Scope, Timeline, Frequency, Urgency. STFU.
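
If you want to force yourself (or your help desk) to fill in all four blanks before anyone starts touching gear, it's trivial to capture as a structure. This is a hypothetical illustration, not a tool I actually use:

```python
from dataclasses import dataclass

@dataclass
class TriageIntake:
    """The four answers to collect before troubleshooting begins."""
    scope: str      # single system, one building, or every site?
    timeline: str   # when did it start, and what changed right before?
    frequency: str  # first occurrence, random, or on a schedule?
    urgency: str    # hospital ER down, or a typo in the docs?

ticket = TriageIntake(
    scope="Guest Wi-Fi only, main campus",
    timeline="Started right after last night's controller upgrade",
    frequency="Constant since it began",
    urgency="Low: corporate SSID unaffected",
)
```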

Memory Aids

Okay, you can stop giggling now. I did that on purpose. In part to help you remember what the acronym was. In part to help you take a bit more of a relaxed approach to troubleshooting. And, in some ways, to help you learn to get those chatterboxes and pushy stakeholders off your back. If your methodology includes STFU they might figure out quickly that you need to be the one asking the questions and they need to be the ones giving the answers, not the other way around.

And yes, each of these little steps would have saved me so much time in my old role. For example:

  • Scope – Was the whole network down? Or did one of the kids just unplug your Ethernet cable?
  • Timeline – Yes, I would assume the problem with VTP started when you put that lab switch into your network.
  • Frequency – Has this server seriously been beeping every 30 seconds for the last two years? Did you bother to look at the error message?
  • Urgency – Do you really need me to drive three hours to press the F1 key on a keyboard?

I seriously have dozens of examples but these are four of the stories I tell all of the time to show just how some basic understanding can help people do more than they think.


Tom’s Take

People love mnemonic devices to remember things. Whether it’s My Very Eager Mother Just Served Us Nine (Pizzas) to remember the eight planets and that one weird one, or All People Seem To Need Data Processing to remember the seven layers of the OSI Model. I remember thinking through the important need-to-know information for doing some basic initial troubleshooting and realizing how easily it fit into an acronym that could be handy in other stressful situations too. Feel free to use it.

Painless Progress with My Ubiquiti Upgrade

I’m not a wireless engineer by trade. I don’t have a lab of access points that I’m using to test the latest and greatest solutions. I leave that to my friends. I fall more in the camp of having a working wireless network that meets my needs and keeps my family from yelling at me when the network is down.

Ubiquitous Usage

For the last five years my house has been running on Ubiquiti gear. You may recall I did a review back in 2018 after having it up and running for a few months. Since then I’ve had no issues. In fact, the only problem I had was not with the gear but with the machine I installed the controller software on. Turns out hard disk drives do eventually go bad and I needed to replace it and get everything up and running again. Which was my intention when it went down sometime in 2021. Of course, life being what it is, I deprioritized the recovery of the system. I realized after more than a year that my wireless network hadn’t hiccuped once. Sure, I couldn’t make any changes to it, but the joy of having a stable environment is that you don’t need to make constant changes. Still, I was impressed that I had no issues that necessitated the recovery of my controller software.

Flash forward to late 2023. I’m talking with some of the folks at Ubiquiti about a totally unrelated matter and I just happened to mention that I was impressed at how long the system had been running. They asked me what hardware I was working with and when I told them they laughed and said I needed to check out their new stuff. I was just about to ask them what I should look at when they told me they were going to ship me a package to install and try out.

Dreaming of Ease

Tom Hildebrand really did a great job because I got a UPS shipment at the beginning of December with a Ubiquiti Dream Machine SE, a new U6 Pro AP, and a G5 Flex Camera. As soon as I had the chance I unboxed the UDM SE and started looking over the installation process. The UDM SE is an all-in-one switch, firewall, and controller for the APs. I booted the system and started to do the setup process. I panicked for a moment because I realized that my computer was currently doing something connected to my old network and I didn’t want to have to dig through the pile to find a laptop to connect in via Ethernet to configure things.

That’s when my first surprise popped up. The UDM SE allows you to download the UniFi app to your phone and do the setup process from a mobile device. I was able to configure the UDM SE with my network settings and network names and get it staged and ready to go without the need for a laptop. That was a big win in my book. Lugging your laptop to a remote site for an installation isn’t always feasible. And counting on someone to have the right software isn’t either. How many times have you asked a junior admin or remote IT person what terminal program they’re using only to be met with a blank stare?

Once the UDM SE was up and running, getting the new U6 AP joined was easy. It joined the controller, downloaded the firmware updates and adopted my new (old) network settings. Since I didn’t have my old controller software handy I just recreated the old network settings from scratch. I took the opportunity to clean out some older compatibility issues that I was ready to be rid of thanks to an old Xbox 360 and some other ancient devices that were long ago retired. Clean implementations for the win. After the U6 was ready to go I installed it in my office and got ready to move my old AP to a different location to provide coverage.

The UDM SE detected that there were two APs that were running but part of a different controller. It asked me if I wanted to take them over and I happily responded in the affirmative. Sadly, when asked for the password to the old controller I drew a blank, because that was two years ago and I can barely remember what I ate for breakfast. Ubiquiti has a solution for that, and with some judicious use of the reset button I was able to reset the APs and join them to the UDM SE with no issues. Now everything is humming along smoothly. The camera is still waiting to be deployed once I figure out where I want to put it.

How is it all working? Zero complaints so far. Much like my previous deployment, everything is humming right along and all my devices joined the new network without complaint. All the APs are running on new firmware and my new settings mean fewer questions about why something isn’t working because the kids are on a different network than the printer or one of the devices can’t download movies or something like that. Given how long I was running the old network without any form of control I’m glad it picked right up and kept going. Scheduling the right downtime at the beginning of the month may have had something to do with that but otherwise I’m thrilled to see how it’s going.


Tom’s Take

Now that I’ve been running Ubiquiti for the last five years, how would I rate it? I’d say people that don’t want to rely on consumer APs from a big box store to run their home network need to check Ubiquiti out. I know my friend Darrel Derosia is doing some amazing enterprise things with it in Memphis but I don’t need to run an entire arena. What I need is seamless connectivity for my devices without worrying about what’s going to go down when I walk upstairs. My home network budget precludes enterprise gear, and Ubiquiti’s price point and functionality fit it nicely. Whether I’m trying to track down a lost Nintendo Switch or limit bandwidth so game updates aren’t choking out my cable modem, I’m pleased with the performance and flexibility I have so far. I’m still putting the UDM SE through its paces and once I get the camera installed and working with it I’ll have more to say, but rest assured I’m very thankful to Tom and his team for letting me kick the tires on some awesome hardware.

Disclaimer: The hardware mentioned in this post was provided by Ubiquiti at no charge to me. Ubiquiti did not ask for a review of their equipment and the opinions and perspectives represented in this post are mine and mine alone with no expectation of editorial review or compensation.

Back On Track in 2024

It’s time to look back at my year that was and figure out where this little train jumped off the rails. I’ll be the first to admit that I ran out of steam chugging along toward the end of the year. My writing output was way down for reasons I still can’t quite figure out. Everything has felt like a much bigger task to accomplish throughout the year. To that end, let’s look at what I wanted to do and how it came out:

  • Keeping Track of Things: I did a little bit better with this one, aside from my post schedule. I tried to track things much more and understand deadlines and such. I didn’t always succeed like I wanted to but at least I made the effort.
  • Creating Evergreen Content: This one was probably a miss. I didn’t create nearly as much content this year as I have in years past. What little I did create sometimes felt unfocused and less impactful. Part of that has to do with the overall move away from written content to something more video and audio focused. However, even my other content like Tomversations was significantly reduced this year. I will say that the one episode that I did record that dealt with InfiniBand was actually really good and I think it’s going to have some life in the future.
  • Insuring Intentionality: I tried to be more intentional with things in 2023 and we’ve seen how that turned out. I think I need to put that more at the front of my mind in 2024 as we look at the way that writing and other content creation is being transformed. In fact, the number of times that I’ve had to fight my AI-based autocomplete to make it stop finishing sentences for me reminds me how intentional I need to be in order to get the right things out there that I want to say. And before you say “just turn it off” I want to see how trainable it is to actually do what I want. So maybe part of the intentionality is making it intentional that I’m going to beat this thing.

Looking back at where I was makes me realize that content creation is always going to be a battle, and so is making sure I have time for it. That means prioritizing the schedule for 2024, which isn’t going to be easy. Tech Field Day is now a part of the Futurum Group, which means I’m going to need to figure out how my role is going to be different in the coming months. I’m still going to be a part of Field Day but I also know I’m going to need to figure out how to navigate new coworkers and new goals. I have also been named a course director for my council’s Wood Badge course in the fall. That means doing some of the hardest leadership work I’ve ever had to do, which I’m sure I’ll be documenting along the way here. As for what I want to specifically work on in 2024, what needs the most help?

  • Reaching Out For Help: Not surprisingly, this is something I have always needed help with (pun intended). I’ve never been one to ask for help with things until it’s almost to the point of disaster. So I need to be better in 2024 about asking for the help I need or think I’m going to need before it gets to be a huge problem. But that also means asking for assistance with things early on to help me get on the right track. Help isn’t always just doing things. It’s about making sure that you have the right ideas before you start down the track. So I’m going to make sure I’m ready to get the guidance and assistance I need when it’s needed and not when it’s an attempt to save the day.
  • Prioritizing Scheduling Intelligently: Part of the struggle in 2023 was making sure I was prioritizing things appropriately. Yes, work things always take priority, as they should. But it’s also about other things that are part of my calendar that I need to get a handle on. I’ve done a good job of letting some of them go over the last year so the next phase is taming the ones that are left. Making sure the important meetings have their place and time but also making sure that those meetings have prep time and other pieces in the calendar so they don’t push anything else out of the way. It’s not enough to just block time and hope for the best. It’s about knowing what needs to be done and making it happen the right way at the right time.
  • Staying Consistent with Content: After the rise of GPT assistants and the flood of video content in 2023 I realize that I like writing more and more. Not having something complete my thoughts for me. Not jumping in front of a video camera to do stuff cold. I like to write. As much as I love the weekly Rundown show that we do I love writing the scripts almost as much. My Zen is in the keyboard, not the camera. I’ll still be creating video content but my focus will be in creating more of the writing that I like so much. I’ve already been experimenting with LinkedIn as a platform and I think I’ll be doing some more there too. Maybe not as much as I hope to do here but we will see how that goes.

Tom’s Take

We all have challenges we have to overcome. That’s the nature of life. As the industry has changed and evolved over time, the way we communicate our ideas and perspectives to everyone has had to change as well. If you’d have told me ten years ago that Twitter would be a ghost town and YouTube would be everyone’s preferred learning tool I might have laughed. Even five years ago I couldn’t have foreseen how things would turn out. The way we make it work is by staying on track and taking the challenges as they come. Switching social media platforms or embracing new content styles is all part of the game. But working with your strengths and making people smile and helping them be informed is part of what this whole game is all about. 2024 is going to be another year of challenges and opportunities to shine. I hope to make the most of it and stay on track to success.

Production Reductions

You’ve probably noticed that I haven’t been writing quite as much this year as I have in years past. I finally hit the wall that comes for all content creators. A combination of my job and the state of the industry meant that I found myself slipping off my self-appointed weekly posting schedule more and more often in 2023. In fact, there were several times I skipped a whole week and only put something out every other week, especially in the latter half of the year.

I’ve always wanted to keep the content level high around here and give my audience things to think about. As the year wore on I found myself running out of those ideas as portions of the industry slowed down. If other people aren’t getting excited about tech why should I? Sure, I could probably write about Wi-Fi 7 or SD-WAN or any number of topics over and over again but it’s harder to repeat yourself for an audience that takes a more critical eye to your writing than it is for someone that just wants to churn out material.

My Bruce Wayne job kept me busy this year. I’m proud of all the content that we created through Tech Field Day and Gestalt IT, especially things like the weekly Rundown show. Writing a post every week is hard. Writing a snarky news show script is just as taxing. If I can find a way to do that I can find a way to write, right?

Moving Targets

Alas, in order to have a plan for content creation you have to make a plan and then stick to it. I did that last year with my Tomversations pieces and it succeeded. This year? I managed to make one. Granted, it was a good one but it was still only one. Is it because I didn’t plan ahead far enough? Or because I didn’t feel like I had much to say?

Part of the secret behind writing is to jot down your ideas right away, no matter how small they might be. You can develop an idea that has merit. You can’t develop a lack of an idea. I have a note where I add quotes and suggestions and random things that I overhear that give me inspiration. Sometimes those ideas pan out. Other times they don’t. I won’t know either way if I don’t write them down and do something about them. If you don’t create the ground for your ideas to flourish you’ll have nothing to reap when it’s time.

The other thing that causes falloffs in content creation is timing. I always knew that leaving my posts until Friday mornings was going to eventually bite me, and this year was the year with teeth. Forcing myself to come up with something in a couple of hours’ time initially led to some pretty rushed ideas, and later that pushed posts into the following Monday (or beyond). While creating a schedule for my thoughts has helped me stay consistent throughout the years, the pressures on my schedule this year have meant letting some things slip when they weren’t critical. It’s hard to prioritize a personal post over a work video that needs to be edited or a paper that needs to be written first.

One other thing that I feel merits some mention is the idea of using tools to help the creative process. I am personally against using a GPT algorithm to write for me. It just doesn’t sound like me and I feel that having something approximating who I am doesn’t have the same feel. Likewise, one of the other things I’m fighting with this year is word prediction in writing tools. Not as bad as full-on content creation, but merely “suggestions” about what word I want to use next. I’ve disabled them for the most part because, while helpful in certain situations, they annoy me more than anything when writing. Seeing a tool suggest a word for me while I’m in the flow of writing a post is like hearing a note a half step out of tune in a piece of music. It’s just jarring enough to take you out of the whole experience. Stop trying to anticipate what I’m going to say and let me say it!

Producing Ahead

Does all this mean I’m giving up on my writing? Not hardly. I still feel like writing is my best form of communication. Even a simple post about complaining about my ability to write this year is going to be wordy. I feel it’s because written words give us more opportunity to work at our own pace. When we watch videos we work at someone else’s idea of a learning pace. If you make a ten-minute video to get across a point that could have been read in three minutes you’re either doing a very good job of explaining everything or you’re padding out your work. I prefer to skim, condense, and study the parts that are important to me. I can’t really do that with a video.

I feel the written form of content is still going to be king for years to come. You can search words. You can rephrase words. You can get a sense for how dense a topic is by word count. There’s value in seeing the entire body of knowledge in front of you before you begin. Besides, the backspace key is a whole lot easier to edit than doing another take and remembering to edit out the bad one in the first place.


Tom’s Take

Writing is practically meditation for me at this point. I can find a topic I’m interested in and write. Empty my brain of thoughts and ideas and let them take shape here. AI can’t approximate that for me. Video has too many other variables to worry about. That’s why I’m a writer. I love the way the process works with just a keyboard, a couple of references, and my brain doing the heavy lifting. I’m not sure what my schedule for posting is going to look like in 2024 and beyond but trust me when I say it’s not going away any time soon.

Routing Through the Forest of Trees

Some friends shared a Reddit post the other day that made me both shake my head and ponder the state of the networking industry. Here is the locked post for your viewing pleasure. It was locked because the comments were going to devolve into a mess eventually. The person making the comment seems to be honest and sincere in their approach to “layer 3 going away”. The post generated a lot of amusement from the networking side of IT about how this person doesn’t understand the basics but I think there’s a deeper issue going on.

Trails To Nowhere

Our visibility into the state of the network below the application interface is very general in today’s world. That’s because things “just work”, to borrow an overused phrase. Aside from the occasional troubleshooting exercise to find out why packets destined for Azure or AWS are failing along the way, when is the last time you had to get really creative in finding a routing issue in someone else’s equipment? We spend more time now trying to figure out how to make our own networks operate efficiently and less time worrying about what happens to the packets when they leave our organization. Provided, of course, that the users don’t start complaining about latency or service outages.

That means that visibility of the network functions below the interface of the application doesn’t really exist. As pointed out in the post, applications have security infrastructure that communicates with other applications and everything is nicely taken care of. Kind of like ordering packages from your favorite online store. The app places the order with a storefront and things arrive at your house. You don’t have to worry about picking the best shipping method or trying to find a storefront with availability or any of the older ways that we had to deal with weirdness.

That doesn’t mean that the processes that enable that kind of service are going away though. Optimizing transport networks is a skill that is highly specialized but isn’t a solved issue. You’ve probably heard by now that UPS trucks avoid left turns whenever possible to optimize safety and efficiency. The kind of route planning that needs to be done in order to eliminate as many left turns as possible from the route is massive. It’s on the order of a very highly specialized routing protocol. What OSPF and BGP are doing is akin to removing the “left turns” from the network. They find the best path for packets and keep up-to-date as the information changes. That doesn’t mean the network is going away. It means we’re finding the most efficient route through it for a given set of circumstances. If a shipping company decides tomorrow that they can no longer guarantee overnight delivery or even two-day shipping that would change the nature of the applications and services that offer that kind of service drastically. The network still matters.
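
For the curious, the "remove the left turns" work in a link-state protocol like OSPF is a shortest-path-first (Dijkstra) calculation over the link-state database. Here's a toy version in Python with a made-up topology and link costs, just to show the shape of the computation:

```python
import heapq

def shortest_paths(graph, source):
    """Lowest-cost path from source to every node, the heart of an SPF run."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue  # stale entry, a cheaper path was already found
        for neighbor, link_cost in graph.get(node, []):
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return dist

# Hypothetical topology: router -> list of (neighbor, link cost)
topology = {
    "A": [("B", 10), ("C", 1)],
    "C": [("B", 1), ("D", 10)],
    "B": [("D", 1)],
    "D": [],
}
print(shortest_paths(topology, "A"))  # A reaches D via C and B for a total cost of 3
```

Every time a link cost changes or a router disappears, the calculation runs again. That's the keeping up-to-date part, and it doesn't stop mattering just because the application never sees it.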

OSI Has to Die

The other thing that jumped out at me about the post was the title, which refers to Layer 3 of the OSI model as a routing function. The timing was fortuitous because I had just finished reading Robert Graham’s excellent treatise on getting rid of the OSI model and I couldn’t agree more with him. Confining routing and addressing functions to a single layer of an obsolete model gives people the wrong ideas. At the very least it encourages them to form bad opinions about those ideas.

Let’s look at the post as an example. Taking a stance like “we don’t need layer three because applications will connect to each other” is bad. So is “we don’t need layer two because all devices can just broadcast for the destination”. It’s wrong to say those things, but if you don’t know why it’s wrong then it doesn’t sound so bad. Why spend time standing up routing protocols if applications can just find their endpoints? Why bother putting higher-order addresses on devices when the nature of Ethernet means things can just be found easily with a broadcast or neighbor discovery transmission? Except you know that’s wrong if you understand how remote networks operate and why having a broadcast domain of millions of devices would be chaos.

Graham has some very compelling points about relegating the OSI model to history and teaching how networks really operate. It helps people understand that there are multiple networks that exist at one time to get traffic to where it belongs. While we may see the Internet and the Ethernet LAN as a single network, they have different purposes. One is for local traffic delivery and the other is for remote traffic delivery. The closest analog for certain generations is the phone system. There was a time when you had local calls and long distance calls that required different dialing instructions. You still have that today but it’s less noticeable thanks to mobile devices not requiring long distance dialing instructions.

It might be more appropriate to think of the local/remote dichotomy like a private branch exchange (PBX) phone network. Phones inside the PBX have locally significant extensions that have no meaning outside of the system. Likewise, remote traffic can only enter the system through entry points created by administrators, like a main dial-in number that terminates on an extension or direct inward dial (DID) numbers that have significance outside the system. Extensions only matter for the local users and have no way to communicate outside without addressing rules. Outside addresses have no way of communicating into the local system without creating rules that allow it to happen. It’s a much better metaphor than the OSI model.
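
The local-versus-remote decision the PBX metaphor describes is also the first decision every host makes when it sends a packet: is the destination on my own network, or does it have to go through the gateway? A small Python sketch with a hypothetical subnet shows the split:

```python
import ipaddress

local_subnet = ipaddress.ip_network("192.168.10.0/24")  # hypothetical LAN, the "extensions"

def next_hop(destination: str) -> str:
    """Decide whether traffic stays inside the 'PBX' or goes out the trunk."""
    if ipaddress.ip_address(destination) in local_subnet:
        return "deliver locally (resolve the host directly with ARP/ND)"
    return "send to the default gateway (the outside line)"

print(next_hop("192.168.10.42"))  # local delivery
print(next_hop("8.8.8.8"))        # routed off-net via the gateway
```

That one branch is the reason both kinds of addresses exist, and it's the step that quietly disappears from view when everything "just works".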


Tom’s Take

I don’t blame our intrepid poster for misunderstanding the way network addresses operate. I blame IT for obfuscating it because it doesn’t matter anymore to application developers. Sure, we’ve finally hit the point where the network has merged into a single entity with almost no distinction between the remote WAN and the local LAN. But we’ve also created a system where people forget the dependencies at the lower levels of the system. You can’t encode signals without a destination and you can’t determine the right destination without knowing where it’s supposed to be. That’s true whether you’re running a simple app in an RFC 1918 private space or the public cloud. Forgetting that little detail means you could end up lost in a forest, unable to route yourself out of it again.