I cry at work

The new podcast “This is Uncomfortable” had an episode about crying in the workplace. I don’t have a lot to say about it, other than it’s a good story. But I wanted to go on record as saying that I cry at work.

Sometimes work is overwhelming. Sometimes my personal life leaves me on edge. Sometimes I read a moving article.

Sometimes I cry. It’s okay to feel things.

Do your best

What does it mean to do your best? I was recently talking to a friend about this. She’s a single mother of four and at the time of the conversation was working part time while she completed her bachelor’s degree. She was upset because she felt like she wasn’t doing the best she could on a particular paper. This bothered her because she’s someone who always tries to do her best.

I said she still was. Even though she could have, in isolation, written a better paper, she was still doing her best in aggregate. That’s one of the most important things I learned in grad school: how to let some things slide while keeping up the overall effort.

Despite what years of motivational talks (and a poem called “Good Enough is Neither” that was drilled into us in ninth grade) have told me, sometimes good enough is good enough. It’s all a balancing act. Part of being an adult is knowing how to strike the balance between all of the things you have and want to do.

For folks with a singular pursuit, perhaps they can focus all of their energy on doing their absolute best in a single thing. For most of us, life doesn’t work that way. Your job isn’t one thing, it’s a collection of things that you do. This is even more true for your family and friends. Sometimes you have to do less than your best at one thing in order to do well enough somewhere else.

I tend to view “doing my best” not as something that happens on a single task, but as a reflection of my effort in the aggregate. I think that’s a healthier approach.

A code analogy for politics

Every once in a while, someone suggests writing laws like code. Bills are pull requests. You can easily see diffs from previous versions. It’s an appealing idea.

But sometimes I think about how political structures resemble code. Specifically, the U.S. Constitution reminds me of most of the code I write: it’s mostly happy path and there’s not a lot of error checking. They both assume good actors all around.

Just as Madison et al. did not consider that a presidential candidate might receive material support from a foreign power and that large portions of Congress might choose to turn a blind eye to it, I don’t really think about how a bad actor might use code I write.

Of course, I mostly write code for my own use. And the Constitution wouldn’t be workable if it tried to cover every detail. But a little more exception handling and testing is probably good for both of us.
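To make the analogy concrete, here’s a small sketch (the config-file scenario and function names are hypothetical, not from any of my actual projects) of the kind of happy-path code I mean, next to a version with a little more exception handling:

```python
def read_config_happy(path):
    # The happy path: assume the file exists, is readable, and
    # contains only well-formed "key=value" lines. Any deviation
    # crashes the program.
    config = {}
    with open(path) as f:
        for line in f:
            key, value = line.strip().split("=")
            config[key] = value
    return config


def read_config_defensive(path):
    # The same logic with a little more error checking: bad
    # actors (or just bad input) no longer bring everything down.
    config = {}
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or "=" not in line:
                    continue  # skip blank or malformed lines
                key, _, value = line.partition("=")
                config[key] = value
    except OSError:
        pass  # missing or unreadable file: return an empty config
    return config
```

The first version is the Constitution: it works beautifully as long as everyone behaves. The second is what you write once you’ve accepted that they won’t.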

Competency degrees and the role of higher education

Several years ago, Purdue University introduced a “competency degree program”. I called it “test out of your degree”. Although the University’s website is short on detail, I gather the general idea is a focus on doing instead of study. Which sounds pretty good on its face, but actually isn’t.

“We’ve hired Purdue grads before,” Dave Bozell, owner of CG Visions, told the Lafayette Journal & Courier, “and they have the theory, but we still have to spend time teaching them how to apply it to what they’re working on.”

Yes, Dave. That’s the point. Universities do not exist to provide vocational training for your employees. That’s your responsibility. That’s why science majors have to take some (but not enough) humanities courses. Higher education is for broad learning. Or at least it used to be.

I wonder sometimes if the Morrill Act — which led to the creation of Purdue University and many other institutions — is what caused the shift from education to training. Uncharitably, it said “this fancy book learnin’ is fine and all, but we need people to have useful skills.” “Useful”, of course, has a pretty strict definition.

Purdue’s College of Technology Dean Gary Bertoline said “there are plenty of high-skill, high-wage technology jobs available, but students just don’t have the skills necessary to fill them.” You know what skills are most lacking in tech these days? It’s not coding. It’s not database optimization. It’s ethics. I doubt that’s in the competency-based degree.

I’d like to see employers doing more to train their employees in the skills needed to perform the day-to-day work. Theory is important, and that’s a good fit for the university model. If you want a more streamlined approach, embrace vocational schools. Much of the work done these days that requires a college degree doesn’t need to. In fact, it might benefit from a more focused vocational approach that leaves graduates in less debt.

But universities should be catering to the needs of the student and the society, not the employer.

Is Slack revolutionary?

No, says Betteridge’s Law. But there are some who will argue it is. For example, Ben Thompson recently wrote “Zoom is in some respects a more impressive business, but its use-case was a pre-existing one. Slack, on the other hand, introduced an entirely new way to work”.

I don’t see that Slack introduced an entirely new way to work. What it did was take existing ways to work and make them suck less. When I joined a former employer, they were using consumer Skype for instant messaging and calls. It worked fairly well from the telephony side, but as a team IM client it was…bad. Channels weren’t discoverable, there were no integrations, and search (if it even existed, I don’t remember now) was useless.

When we switched to Slack, it was so much better than the way we had been working. But none of the concepts were new, they were just better executed. Many tools have attempted to address the use cases that Slack handles well. They just didn’t succeed in the same way. Does that make Slack revolutionary? Maybe it’s splitting hairs, but I could see an argument that Slack had a revolutionary impact without being revolutionary itself.

If a thank you note is a requirement, I don’t want to work for you

Jessica Liebman wrote an article for Business Insider where she shared a hiring rule: If someone doesn’t send a thank-you email, don’t hire them. This, to be blunt, is a garbage rule. I don’t even know where to begin describing why I don’t like it.

When I’ve been on the hiring team, a short, sincere “thank you” email has always been nice to receive. But I’ve never held the lack of one against a candidate. It’s not like we’re doing them some huge favor. We’re trying to find a mutually beneficial fit. And employers hold most of the power, in the interview process and beyond.

You can lament it if you want, but the social norm of sending thank-yous for gifts is greatly diminished. So even if it would have been appropriate in the past, it’s no longer expected. And thank-you expectations are culture-specific anyway.

Until employers see fit to offer meaningful feedback to all applicants, they can keep their rule requiring thank you notes to themselves. And even after that. If an employer wants to use arbitrary gates that have no bearing on performing the job function, I don’t want to work for them.

Protecting the privacy interests of others

Every so often, I think about privacy. Usually because Facebook or another large company has acted stupidly again. And I’ll admit that despite the lousy track record that many companies have, I make the choice to use their services anyway because I determine the value to outweigh the negatives. But not everyone makes that choice.

When we talk about protecting privacy, we generally talk about protecting our own privacy. But our privacy impacts the privacy of others. I got on this line of thought a while back while listening to This Week in Law (RIP) episode 440. They were talking about what happens to your digital property (e.g. email and social media accounts) after you die. While I won’t particularly care about what is said about me after I’m dead — I’ll be dead after all — it’s not just my content there.

Sometimes my friends tell me things about their lives. The most convenient way happens to be email or instant messaging. Now you can argue that these sorts of things should be discussed in a more secure manner, but that ignores the way people live their actual lives. Anyway, sometimes my friends tell me things that they wouldn’t necessarily want others to know. Secrets about relationships, desires, worries, etc.

If my accounts become available to someone else after my death, then so do the messages sent in confidence to me. And just because my friend felt comfortable confiding in me, that doesn’t necessarily mean they’ll feel comfortable with my estate knowing their secrets.

It’s a tricky situation. A generation or two ago, these sorts of things would be communicated in person, over the phone, or by written letter. Only the last of these would leave a record of the content, and even then the letters were likely destroyed fairly soon. The ability to cheaply store communications en masse is both a blessing and a curse. Neither law nor societal norms have yet come to terms with this new world.

Where to file an issue?

Recently, the Fedora Engineering Steering Committee (FESCo) entertained a proposal to allow people to file issues in the repo where Fedora RPM spec files live. They ultimately rejected the proposal in favor of keeping those issues in Red Hat Bugzilla. I didn’t weigh in on that thread because I don’t have a set opinion one way or another, but it raised some interesting points.

First, I’d argue that Bugzilla is hostile for end users. There are a lot of fields, many of which aren’t meaningful to non-developers. It can be overwhelming. Then again, there probably aren’t too many end users filing bugs against spec files.

On the other hand, having multiple places to file bugs is hostile for users, too. “Where do I file this particular bug? I don’t know, I just want things to work!”

Having multiple places for bugs can be helpful to developers, so long as the bugs are filed in the right place. Spec file bugs make sense to file in the same place as the spec files generally. But they might make more sense elsewhere if they block another bug or fit into another workflow. And the odds of a bug being filed in the right place aren’t great to begin with.

This is a question for more than just Fedora though. Any project that has multiple pieces, particularly upstream dependencies, needs to think about how this will work. My take is that the place the user interfaces with the code is where the issue should be filed. It can then be passed upstream if appropriate, but the user shouldn’t have to chase the issue around. So if an issue manifests in my project, but the fault lies in upstream code, it’s my responsibility to get it fixed, not the user’s.

So now that I’ve typed all this out, I suppose I would argue that issues should be enabled on src.fedoraproject.org and that it’s the package maintainer’s responsibility to make the connection to Bugzilla where required.

Avoiding being a remote hermit

Last week I wrote a little bit about my experience working from home. I mentioned that I sometimes work from a local coworking space to get away from the noise of my kids. What I didn’t say is that I do it to be around people, because I don’t. I like being social, but I don’t feel like I miss anything working from home.

I leave the house more often than I’d probably choose to. I think my record is eight days without leaving the house, but it’s almost always much shorter than that. Sometimes it’s as simple as taking the kids to school in the morning. Other times I actually go do things with my friends. But I can’t say I’ve ever felt the need to work from not-house just to be around people.

Part of that is that I often interact with people over text (e.g. Twitter, instant messaging, etc.) anyway. In my jobs, I’ve always been able to be relatively available online, so I’m able to keep in touch when I need interaction. And I often spend time on the phone or in video calls with people, so I get that higher-bandwidth interaction, too.

But I can see how someone freelancing or otherwise not interacting with coworkers regularly can quickly become a recluse. The Trello blog recently ran an article about avoiding becoming a hermit. I read it thinking “yeah, this is good advice but I take a slightly different approach.”

For example, I don’t dress up in “work clothes”. I wear shorts and a t-shirt when it’s warm and add more clothing when it gets colder. But I do have a rule that I won’t wear pajamas unless I’m sick. I don’t need slacks and a collared shirt to feel like I’m at work, but wearing pajamas is basically an invitation to not even bother.

I also don’t watch TV during the work day, with rare exception (hello, NCAA tournament!). But I do listen to podcasts. I frequently notice that I don’t really pay attention to what’s been said; they’re really more like background noise a lot of the time. Except when I need to focus on reading, the podcasts don’t really get in the way. I can even write with a podcast playing most of the time.

Overall, not leaving the house is one of the benefits of working from home for me. Life forces me out of the house enough, and I’m just social enough, that I can still get the human interaction I need. Your mileage may vary.

Working remotely or remotely working?

It’s been almost six years since I became a full-time telecommuter. While I won’t rule out working in an office again, it’s hard to imagine at this point. Offices are a nice place to visit, drink the free coffee and soda, and then leave.

I’ve worked for an entirely remote tech startup. I’ve worked as (as far as I know) the only true remotee in a 100-plus person division of a 130 kiloperson company. Now I work on a different continent from my manager on a project where some of the people I work with aren’t even employees of my company.

Across these different experiences, I’ve had both good and bad. But working remotely has not only been personally beneficial, I think it’s made me a better employee. Oh sure, there are times that I just sit there and stare at my screen blankly. Or I’ll absentmindedly surf the Internet instead of doing work. But I did that when I worked in an office.

But working from home means that when I can’t focus on work, I can step away for a few minutes to do laundry or vacuum or read a book to my kid. These short breaks where I can truly get away mean that I can focus that much better when I get back. I was never able to do that working in an office.

When I was working in marketing at Cycle Computing, I would sometimes mow the lawn during the work day. I didn’t need to be immediately available in case of emergency, and I found that the forced isolation of mowing the lawn made it easy to focus. I could do a lot of writing in my head as I mowed and immediately type it up when I got back inside.

I haven’t found that I need to be more disciplined working from home. I have a room with a door that I use as my office, so I have some physical separation between “work” and “home”. I did a few work-from-home days when I was at Purdue, and each time I noticed that I was much more productive because I didn’t get pulled into a bunch of conversations I didn’t need to be in.

These days, I sometimes leave the confines of the house to work from the coworking space I belong to downtown. The main motivation is that my kids are bigger and louder than they used to be, so days when they’re home, they make it difficult to concentrate. And sometimes it’s nice to have a cup of coffee that I didn’t have to make for myself. And now there’s science to back up my decision to stay out of the office.