Google Duplex and the future of phone calls

For the longest time, I would just drop by the barber shop in the hopes they had an opening. Why? Because I didn’t want to make a phone call to schedule an appointment. I hate making phone calls. What if they don’t answer and I have to leave a voicemail? What if they do answer and I have to talk to someone? I’m fine with in-person interactions, but there’s something about phones. Yuck. So I initially greeted the news that Google Duplex would handle phone calls for me with great glee.

Of course it’s not that simple. A voice-enabled AI that can pass for human is ripe for abuse. Imagine the phone scams you could pull.

I recently called a local non-profit that I support to increase my monthly donation. They did not verify my identity in any way. So that’s one very obvious avenue for mischief. I could also see tech support scammers using this as a tool in their arsenal, if not to actually conduct the fraud then to pre-screen targets so that humans only have to talk to likely victims. It’s efficient!

Anil Dash, among many others, pointed out the apparent lack of consent in Google Duplex.

The fact that Google inserted “um” and other verbal placeholders into Duplex makes it seem like they’re trying to hide the fact that it’s an AI. In response to the blowback, Google has said it will disclose when a bot is calling.

That helps, but I wonder how much abuse consideration Google has given this. It will definitely be helpful to people with disabilities that make using the phone difficult. It can be a time-saver for the Very Important Business Person™, too. But will it be used to expand the scale of phone fraud? Could it execute a denial of service attack against a business’s phone lines? Could it be used to harass journalists, advocates, abuse victims, etc?

As I read news coverage of this, I realized that my initial reaction didn’t consider abuse scenarios. That’s one of the many reasons diverse product teams are essential. It’s easy for folks who have a great deal of privilege to be blind to the ways technology can be misused. I think my conclusion is a pretty solid one:

The tech sector still has a lot to learn about ethics.

I was discussing this with some other attendees at the Advanced Scale Forum last week. Too many computer science and related programs do not require any coursework in ethics, philosophy, and the like. Most of computing is not about the computers themselves, but about the humans and societies those computers interact with. We see the effects play out in open source communities, too: anything that’s not code is immediately devalued. But the last few years should teach us that code without consideration is dangerous.

Ben Thompson had a great article in Stratechery last week comparing the approaches of Apple and Microsoft versus Google and Facebook. In short: Apple and Microsoft are working on AI that enhances what people can do while Google and Facebook are working on AI to do things so people don’t have to. Both are needed, but the latter would seem to have a much greater level of ethical concerns.

There are no easy answers yet, and it’s likely that in a few years tools like Google Duplex won’t even be noticeable because they’ve become so ubiquitous. The ethical issues will be addressed at some point; the only question is whether that happens proactively or reactively.

LISA wants you: submit your proposal today

I have the great honor of being on the organizing committee for the LISA conference this year. If you’ve followed me for a while, you know how much I enjoy LISA. It’s a great conference for anyone with a professional interest in sysadmin/DevOps/SRE. This year’s LISA is being held in Nashville, Tennessee, and the committee wants your submission.

As in years past, LISA content is focused on three tracks: architecture, culture, and engineering. There’s great technical content (one year I learned about Linux filesystem tuning from the guy who maintains the ext filesystems), but there’s also great non-technical content. The latter is a feature more conferences need to adopt.

I’d love to see you submit a talk or tutorial about how you solve the everyday (and not-so-everyday) problems in your job. Do you use containers? Databases? Microservices? Cloud? Whatever you do, there’s a space for your proposal.

Submit your talk to https://www.usenix.org/conference/lisa18/call-for-participation by 11:59 PM Pacific on Thursday, May 24. Or talk one of your coworkers into it. Better yet, do both! LISA can only remain a great conference with your participation.

The worst part of open source is code uber alles

If you know me, you know I’m an open source person. I use, contribute to, and advocate for open source software. I’ve written dozens of articles for Opensource.com. But open source has a big problem: open source communities tend to value code above all else.

Code is undeniably an important part of open source software. It’s hard to have software without code. But there’s a lot more to it.

Software doesn’t exist for its own benefit; it is written to serve the needs of people. This means that activities dealing with people are also critically important. Project management, design, QA, community management, marketing, et cetera are all people functions.

This isn’t to say that the people functions are more important than code. Without code, those functions don’t have a whole lot to do. But they all inform how the code is written, shared, and used. A project that only ships code is about as useful as a project that ships no code.

Open source projects need to write code. But they don’t need to diminish non-code contributions. And they particularly don’t need to diminish non-code contributors. And most importantly, they can’t accept bad behavior from a contributor just because they write a lot of good code.

Making the right tool for the job

A while back I came across a post where a developer took code that ran in 5 days and shortened it to 15 minutes. My immediate reaction was to treat it as “I was doing the wrong thing, so I stopped doing that and did the right thing instead.” But it wasn’t so simple. The developer clearly wasn’t an idiot.

When someone writes a new thing, I default to assuming they’re bad at Google or would rather spend their time writing unnecessary code than doing the thing they’re ostensibly trying to accomplish. That’s not always the case, of course, but I’ve found it to be a sane default over the years.

But in this case, the post’s author clearly thought through the problem. The tools he had available were unsuitable, so he made a new tool. It works on a much narrower set of problems than the existing tools, which is why it can be so much faster. But it’s not so narrow that it will only ever work this one time. It’s a good mix of general utility and specific utility.

Book review: Forge Your Future with Open Source

If you are looking for a book on open source software, you have roughly a zillion options. Operating systems, languages, frameworks, desktop applications, whatever. Publishers have cranked out books left and right to teach you all about open source. Except for one small detail: how do you get started? Yesterday, The Pragmatic Bookshelf fixed that glitch. They announced the beta release of Forge Your Future with Open Source: Build Your Skills. Build Your Network. Build the Future of Technology by VM Brasseur.

I should disclose two things at this point: 1. VM is a friend and 2. I will receive a complimentary copy of the book in exchange for a technical review I performed. Now that I have fulfilled my ethical obligations, let’s talk about this book.

This is a very good book. It’s a book I wish I had years ago when I was first starting in open source. Brasseur covers understanding your motivations for contributing, determining requirements for a project you’ll contribute to, finding a project that matches those requirements, and getting started with your first contribution.

She assumes very little knowledge on the reader’s part, which is welcome. Don’t know the difference between copyleft and permissive licenses? That’s okay! She explains them both, including the legal and cultural aspects, without nudging the reader toward her preferred paradigm. Indeed, you’ll find no judgement of license, language, tool, or operating system choices. VM has no time for that in real life, so you won’t find it in her book either.

One of the better things about this book is that it is not really a technical book. Yes, it discusses some technical concepts with regard to code repositories and the like, but it puts great emphasis on the non-technical parts of contributing. Brasseur covers communication, community structure, and collaboration.

Forge Your Future with Open Source was not quite complete when I performed my technical review, but it was complete enough to know that this is an excellent book. Newcomers to open source will benefit from reading it, as will old hands such as myself. The final version will be published in June, but you can order a beta copy now through The Pragmatic Bookshelf.

Installing Lubuntu on a Dell Mini 9

My six-year-old daughter has shown an interest in computers. In 2016, we bought a Kano for her and she loves it. So I decided she might like to have her own laptop. We happened to have a Dell Mini 9 from 2011 or so that we’re not using anymore. I figured that would be a good Christmas present for her.

Selecting the OS

The laptop still had the Ubuntu install that shipped with it. I could have kept using that, but I wanted to start with a clean install. I use Fedora on our other machines, so I wanted to try that. Unfortunately, Fedora decided to drop 32-bit support since the community effort was not enough to sustain it.

I tried installing Kubuntu, a KDE version of Ubuntu. However, the “continue” button in the installer’s prepare step would not switch to active. Some posts on AskUbuntu suggested annoying it into submission or not pre-downloading updates. Neither of these options worked.

After a time, I gave up on Kubuntu. Given the relatively low power of the laptop, I figured KDE Plasma might be too heavy anyway. So I decided to try Lubuntu, an Ubuntu variant that uses LXDE.

Installing Lubuntu

With Lubuntu, I was able to proceed all the way through the installer. I still had the “continue” button issue, but so long as I didn’t select the download updates option, it worked. Great success! But when it rebooted after the install, the display was very, very wrong. It was not properly scaled and the text was impossible to read. Fortunately, I was not the first person to have this problem, and someone else had a solution: setting the resolution in the GRUB configuration.

I had not edited a GRUB configuration in a long time. In fact, the last time I did, GRUB 2 was still in the future, so I had to find instructions. Once again, AskUbuntu had the answer. I already knew what I needed to add; I had just forgotten how to update the configuration appropriately.
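
For anyone hitting the same problem, the fix looks roughly like this (a sketch from memory; the Mini 9’s panel is 1024x600, and your paths may differ):

    # /etc/default/grub
    GRUB_GFXMODE=1024x600          # force the panel's native resolution
    GRUB_GFXPAYLOAD_LINUX=keep     # keep that mode when the kernel takes over

    # Regenerate the configuration (Ubuntu's wrapper around grub-mkconfig)
    sudo update-grub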

Up until this point, I had been using the wired Ethernet connection, but I wanted my daughter to be able to use the Wi-Fi network. So I had to install the Wi-Fi drivers for the card. Lastly, I disabled IPv6 (which I have since done at the router). Happily, the webcam and audio worked with no effort.
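
Disabling IPv6 on the machine itself comes down to a couple of sysctl settings, something like this sketch (the file name is arbitrary):

    # /etc/sysctl.d/40-disable-ipv6.conf
    net.ipv6.conf.all.disable_ipv6 = 1
    net.ipv6.conf.default.disable_ipv6 = 1

    # Apply without rebooting
    sudo sysctl --system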

What I didn’t do

Because I hate myself, I still haven’t set up Ansible to manage the basics of the configuration across the four Linux machines we use at home. I had to manually create the users. Since my daughter is just beginning to explore computers, I didn’t have a lot of software I needed to install. The web browser and office suite are already there, and that’s all she needs at the moment. This summer we’ll get into programming.
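
The accounts, at least, are exactly the kind of thing Ansible makes trivial. A minimal sketch (the account names are hypothetical):

    # Hypothetical task from the playbook I keep not writing
    - name: Create the family user accounts
      user:
        name: "{{ item }}"
        shell: /bin/bash
        create_home: yes
      loop:
        - dad
        - kiddo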

All done

I really enjoyed doing this installation, despite the frustrations I had with the Kubuntu installer. When I got my new ASUS laptop a few months ago, everything worked out of the box. There was no challenge. This at least provided a little bit of an adventure.

I’m pleasantly surprised how well it runs, too. My also-very-old HP laptop, which has much better specs on paper, is much more sluggish. Even switching from KDE to LXDE on it doesn’t help much. But the Mini 9 works like a charm, and it’s a good size for a six-year-old’s hands.

After only a few weeks, the Wi-Fi card suddenly failed. I bought a working system on eBay for about $35 and pulled the Wi-Fi card out of that. I figure as old as the laptop is, I’ll want replacement parts again at some point. But so far, it is serving her very well.

sudo is not as bad as Linux Journal would have you believe

Fear, uncertainty, and doubt (FUD) is often used to undercut the use of open source solutions, particularly in enterprise settings. And the arguments are sometimes valid, but that’s not a requirement. So long as you make open source seem risky, it’s easier to push your solution.

I was really disappointed to see Linux Journal run a FUD article as sponsored content recently. I don’t begrudge them running sponsored content generally. They clearly label it, and it takes money to run a website. Linux Journal pays writers and that money has to come from somewhere. But this particular article was tragic.

Chad Erbe uses “Four Hidden Costs and Risks of Sudo Can Lead to Cybersecurity Risks and Compliance Problems on Unix and Linux Servers” to sow FUD far and wide. sudo, if you’re not familiar with it, is a Unix command that allows authorized users to run authorized commands with elevated privileges. The most common use case is to allow administrators to run commands as the root user, but it can also be used to give, for example, webmasters the ability to restart the web server without giving them full access.
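
To make that concrete, here is a minimal sudoers sketch for the webmaster case (the group and command are hypothetical):

    # /etc/sudoers.d/webmasters -- edit with visudo, never directly
    # Members of the webmasters group may restart Apache, and nothing else
    %webmasters ALL = /usr/bin/systemctl restart httpd

The granularity is the point: sudo grants specific commands to specific users on specific hosts.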

So what’s wrong with this article?

Administrative costs

Erbe argues that using sudo adds administrative overhead because you have to maintain the configuration file. It’s 2017: if you’re not using configuration management already then you’re probably a lost cause. You’re not adding a whole new layer, you’re adding one more file to the dozens (or more) you’re coordinating across your environment.
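
As an illustration, a single Ansible task (a sketch; the file names are hypothetical) keeps a sudoers drop-in consistent across every host, and validates it before installing:

    # Hypothetical Ansible task: sudoers is just one more managed file
    - name: Deploy the webmasters sudoers drop-in
      copy:
        src: files/webmasters-sudoers
        dest: /etc/sudoers.d/webmasters
        owner: root
        group: root
        mode: "0440"
        validate: /usr/sbin/visudo -cf %s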

Erbe sets up a “most complicated setup” strawman and knocks it down by saying commercial solutions could help. He doesn’t say how, though, and there’s a reason for that: the concerns he raises apply to any technology that provides the solution. I have seen sites that use commercial solutions to replace sudo, and they still have to configure which users are authorized to use which commands on which servers.

Forensics and audit risks

sudo doesn’t have a key logger or log chain of custody. That’s true, but that doesn’t mean it’s the wild west. Erbe says configuration management systems can repair modified configuration files, but with a delay. That’s true, but tools like Tripwire are designed to catch these very cases. And authentication/authorization logs can be forwarded to a centralized log server. That’s probably something sysadmins should have set up already.
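
As a sketch (assuming rsyslog; the host name is made up), that forwarding is a one-line drop-in:

    # /etc/rsyslog.d/forward-auth.conf
    # Send authpriv messages (where sudo logs by default) to a central server
    # @@ means TCP; a single @ would use UDP
    authpriv.*    @@loghost.example.com:514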

sudo provides a better level of audit logging compared to switching to the root account. It logs every command run and who runs it. Putting a key logger in it would provide no additional benefit. The applications launched with sudo (or the operating system itself) would need it.
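
For reference, a typical sudo log entry looks like this (the host, user, and command are illustrative):

    Nov 14 10:23:01 web01 sudo: alice : TTY=pts/0 ; PWD=/home/alice ; USER=root ; COMMAND=/usr/bin/systemctl restart httpd

That’s the who, where, and what of every privileged command, which is the audit trail that actually matters.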

Business continuity risks

You can’t roll back sudo and you can’t get support. Except that you can, in fact, downgrade the sudo version if it contains a critical bug. And you can get commercial support: not for sudo specifically, but for your Linux installs generally.

Lack of enterprise support

This seems like a repeat of the last point, with a different focus. There’s no SLA for fixing bugs in sudo, but that doesn’t mean it’s inherently less secure. How many products developed by large commercial vendors have security issues discovered years later? A given package being open source does not imply that it is more or less secure than a proprietary counterpart, only that its source code is available.

A better title for this article

Erbe raises some good points, but loses them in the FUD. This article would be much better titled “why authorization management is hard”. That approach, followed by “and here’s how my proprietary solution addresses those difficulties”, would be a very interesting article. Instead, all we get is someone paying to knock down some poorly constructed strawmen. The fact that it appears in Linux Journal gives it a false sense of credibility, and that’s what makes it dangerous.

HP laptop keyboard won’t type on Linux

Here’s another story from my “WTF, computer?!” files (and also my “oh I’m dumb” files).

As I regularly do, I recently updated my Fedora machines. This includes the crappy HP 2000-2b30DX Notebook PC that I bought as a refurb in 2013. After dnf finished, I rebooted the laptop and put it away. Then while I was at a conference last week, my wife sent me a text telling me that she couldn’t type on it.

When I got home I took a look. Sure enough, the keyboard wouldn’t type. But it was weirder than that. I could type the decryption password for the hard drive at the beginning of the boot process. And when I attached a wireless keyboard, I could type. Knowing the hardware worked, I dropped to runlevel 3. The built-in keyboard worked there.

I tried applying the latest updates, but that didn’t help. Some internet searching led me to Freedesktop.org bug 103561. Running dnf downgrade libinput and rebooting gave me a working keyboard again. The bug is closed as NOTABUG, since the maintainers say it’s an issue in the kernel, one that is fixed in the 4.13 kernel release. So I checked to see if Fedora 27, which was released last week, includes the 4.13 kernel. It does, and so does Fedora 26.

That’s when I realized I still had the kernel package excluded from dnf updates on that machine because of a previous issue where a kernel update caused the boot process to hang while/after loading the initrd. I removed the exclusion, updated the kernel, and re-updated libinput. After a reboot, the keyboard still worked. But if you’re using a kernel version from 4.9 to 4.12, libinput 1.9, and an HP device, your keyboard may not work. Update to kernel 4.13 or downgrade libinput (or replace your hardware. I would not recommend the HP 2000 Notebook. It is not good.)
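
If you find yourself in the same spot, the whole dance looks roughly like this (a sketch; package versions will vary):

    # The line I had to remove from /etc/dnf/dnf.conf
    # exclude=kernel*

    # Workaround: downgrade libinput...
    sudo dnf downgrade libinput

    # ...or the real fix: move to a 4.13+ kernel, then take libinput updates again
    sudo dnf update kernel libinput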

Using the ASUS ZenBook for Fedora

I recently decided that I’d had enough of the refurbished laptop I bought four years ago. It’s big and heavy and slow and sometimes the fan doesn’t work. I wanted something more portable and powerful enough that I could smoothly scroll the web browser. After looking around for good Linux laptops, I settled on the ASUS ZenBook.

Installation

The laptop came with Windows 10 installed, but that’s not really my jam. I decided to boot off a Fedora 26 KDE live image first, just to make sure everything worked before committing to an install. Desktop Linux has made a lot of progress over the years, but you never know which hardware might not be supported. As it turns out, that wasn’t a problem. Wi-Fi, Bluetooth, webcam, speakers, and so on all worked out of the box.

It’s almost disappointing in a sense. There used to be some challenge in getting things working, but now it’s just install and go. This is great overall, of course, because it means Linux is more accessible to new users and it’s less crap I have to deal with when I just want my damn computer to work. But there’s still a little bit of nostalgia for the days when configuring X11 by hand was something you had to do.

Use

I’ve had the laptop for a little over a month now. I haven’t put it through quite the workout I’d hoped to, but I feel like I’ve used it enough to have an opinion at this point. Overall, I really like it. The main problem I have is that the trackpad has a middle-click, which is actually pretty nice except for when I accidentally use it. I’ve closed many a browser tab because I didn’t move my thumb far enough over. That’s probably something I can disable in the settings, but I’d rather learn my way around it.
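
(For anyone who does want to turn the middle-click off, remapping the button away with xinput is one approach; the device name below is a placeholder:)

    # Find the touchpad's name or id
    xinput list

    # Map physical button 2 (middle click) to 0, i.e. disabled
    xinput set-button-map "ELAN Touchpad" 1 0 3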

The Bluetooth has been flaky transferring files to and from my phone. But audio is… well, I’ve never found Bluetooth audio to be particularly great, but it works as well as anything else.

One other bit of trouble I’ve had is with my home Wi-Fi. I bought a range extender so that I can use Wi-Fi on the back deck, and set it to use the same SSID as the main router. The directions said you can do this, but that it might cause problems. With this laptop, the Wi-Fi connection becomes unusable after a short period of time. Turning off the range extender fixes it, and I’ve had no other problems on other networks, so I guess I know what I have to do.

One thing that really stood out to me is carrying it around in a backpack. This thing is light. I had a few brief moments of panic thinking I had left it behind. I’ve held lighter laptops, but this is a good weight. And don’t worry about the lightness: it still has plenty of electrons, so the battery life is good.

Around the same time I bought this, I got a new MacBook Pro for work. When it comes to typing, I like the keyboard on the ZenBook way better than the new MacBook keyboards.

Recommendation

If you’re looking for a lightweight Linux laptop that can handle general development and desktop applications, the ASUS ZenBook is a great choice. Shameless commercialism: If you’re going to buy one, maybe use this here affiliate link? Or don’t. I won’t judge you.

One bad thing about the death of Flash

Adobe Flash is only mostly dead. That means it’s slightly alive. But the death of Flash is nigh. At least if you consider 2020 “nigh”. By and large, this is celebrated. Flash is notoriously riddled with vulnerabilities, it wrecks accessibility, etc. But losing Flash is still a little bit sad.

Not just because it pioneered interactive web content, as the TechCrunch coverage notes, but because of all the games and silly websites that will become unusable. Major projects will be converted to HTML5. Sites that are mostly video (I’m thinking of Homestar Runner in particular) may end up as recorded video that can be watched, but not interacted with.

But what about all of the little one-off sites? How much time did I spend in college playing miniputt.swf instead of studying for finals? (Spoiler alert: a lot) How many little educational games have been created that won’t get recreated?

Maybe the lost sites will be replaced by new projects. But it’s a concern we face with every file format: what happens when it’s no longer supported? We have centuries of printed records that can be analyzed by researchers. Centuries from now, will that be true of our digital artifacts?

This is an argument for using open formats instead of proprietary. But even that is no guarantee of future durability. An open format isn’t very helpful if no software implements it. I’m older than the JPEG and GIF standards. Will I outlive them, too? In the not-too-distant future, there may be a niche market for software that implements ancient technology for the purposes of historical preservation.