Apple knows better than you: iPhone battery edition

Last week, Apple confirmed what some users had long suspected: iPhone performance is deliberately throttled. The reasoning is sound: as iPhone battery performance decreases, the CPU performance scales back to extend battery life. It’s a sensible action to take, assuming you prefer battery life to performance. The assumption is the issue, though. Apple didn’t let the user make the decision. Apple decided that battery life is more important than performance and didn’t bother communicating this to the user.

Apple’s problem is not the decision, but the implementation. I suspect that the majority of iPhone users (or any smartphone for that matter) prize battery life over CPU speed. Most of them probably aren’t pushing their CPU on the regular. Preferring battery life is a sane default. Giving the user a choice is better. Making it clear that it’s happening is the bare minimum.

This is one of those credibility-risking moves that Apple likes to make. And in fairness, they generally come out ahead. Apple has long recognized the value in simplicity. Fewer options mean less complexity. This makes users happier, even if they think they want a knob for everything.

But this particular case may be a little bit different. Users noticed an apparent slowdown. Cynics said it was to encourage people to buy the latest model. It wasn’t until an iPhone owner benchmarked his phone and posted the proof that Apple admitted the slowdown. Even though their reasoning makes sense, it’s hard to shake the narrative that they’re pushing their customers into making new purchases. If it were really about battery life, why did they need to be forced into admitting it?

I’m inclined to give Apple the benefit of the doubt. After all, the company has a history of smug superiority. And much of the time, they do know better than their customers. That doesn’t mean they can’t screw it up sometimes. And this time, I think they did. We’ll see what comes from the lawsuits.

What the acquisition means for Shazam

I was surprised to see the news that Apple is acquiring Shazam. After all, they’re a devices company, right? Maybe not, as the “services” division is now their second-largest revenue line and growing. So what does Shazam do to help Apple? Two things that I see.

The first is that it gives them an avenue for selling music. Hear a song and wonder what it is? Fire up Shazam to identify it and here’s a handy link to buy it in the iTunes store. Right now (at least on Android), users have a choice between Google and Amazon for track purchases. You have to think Apple would want to get in on that. It’s a prime opportunity for impulse buys.

The second benefit is that it gives Apple more data about the songs people are interested in. The utility of this data is not immediately obvious to me, but I’m sure someone in Apple’s spaceship can figure out how to put it to use. Can they execute on that idea, though? I admittedly don’t pay a lot of attention to Apple, but they don’t seem to have the data chops of Google or Amazon.

But the title of this post is what the acquisition means for Shazam, not what it means for Apple. My first thought was “well I guess I won’t be able to use Shazam anymore.” Most of Apple’s software acquisitions have been focused on Siri or Apple Maps. Neither of those is available outside of the Apple ecosystem. CUPS (yes, the Unix print system) is the only acquisition that remains available outside of Apple, as far as I can tell.

Apple has no real desire to make its software available to non-iOS/macOS users. iTunes is a notable exception, but for the most part, you can’t expect Apple software outside of Apple hardware. Apple makes its money on services and hardware sales, not on software. And I can’t fault them for sticking to what works.

The question remains: will Shazam continue to be available across platforms? If Apple’s motivation is primarily to use it as an iTunes sales engine, I think it will. If they want to use it as a differentiator in a competitive smartphone market, they won’t. I’m inclined to favor the sales engine scenario, but time will tell.

Will Apple get tangled up in wireless headphones?

Last week, Apple announced the latest version of their flagship product. The iPhone 7 begins shipping to customers on Friday, and it will be the first iPhone without a headphone jack. The 3.5mm jack, which has been around since at least 1964, is a standard that appears on computers, music players, phones, some airline seats, and more. That standardization means you can use one set of headphones in any of those places without hassle (except for detangling the cords, of course).

But no more, says Apple. They used “courage” to describe this decision, a phrasing that has been soundly mocked. Courage probably isn’t the right word, but it’s certainly bold. This is a big risk that Apple hopes will lay the foundation for additional changes that lead to an inarguably better product. Of course, it might instead deepen the sales plateau and the growing sense of meh.

Apple supporters are quick to point out that the doomsayers were wrong about Apple’s decision to remove floppy drives, CD drives, and ethernet ports. This feels like a different scenario, though. In previous cases, there was always something better to use instead (though I still wish the MacBook Pro I use at work had a wired ethernet port). Particularly by the time the optical drive was killed, USB drives and network services met the needs of the average consumer much better.

What’s the better option for the iPhone 7? Purchasing headphones that can only be used with Apple products, that require charging every few hours, that can’t be used while the phone is charging without an additional adapter? Will the technology used by these wireless headphones avoid the lag and disconnection issues that can frustrate Bluetooth device usage? Will noisy spectrum become an issue in crowded spaces like buses and subways? Will people be able to avoid losing them?

Apple’s previous removals proved to be successful enough that other manufacturers followed suit. But that success was possible in part because better standard solutions were available. This time, there’s no standard; it’s Apple or nothing. I don’t see that there’s a compelling enough story for the average consumer to support this as a long-term change. I’m no soothsayer, and I could end up completely wrong. But I bet Samsung really wishes they could have a do-over on the Galaxy Note 7’s battery: it could have been a great chance for them to take some of Apple’s market share.

Parsing SGE’s qacct dates

Recently I was trying to reconstruct a customer’s SGE job queue to understand why our cluster autoscaling wasn’t working quite right. The best way I found was to dump the output of qacct and grep for {qsub,start,end}_time. Several things made this unpleasant. First, the output is not de-duplicated on job id: jobs that span multiple hosts get listed multiple times. Second, the dates are in a nearly-but-not-quite “normal” format. For example: “Tue Mar 18 13:00:08 2014”.
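
For reference, the dump looked roughly like this (qacct -j with no job id prints accounting records for every job; the grep pulls the job number out alongside the timestamps):

qacct -j | grep -E '^(jobnumber|qsub_time|start_time|end_time)'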

What can you do with that? Not a whole lot. It’s not a format that spreadsheets will readily treat as a date, so if you want to do spreadsheety things, you’re forced to either manually enter them or write a shell function to do it for you:

# Usage: qacct2excel 'Tue Mar 18 13:00:08 2014'
function qacct2excel { echo "=`date -f '%a %b %d %T %Y' -j \"$1\" +%s`/(60*60*24)+\"1/1/1970\""; }

The above works on OS X because it uses a non-GNU date command. On Linux, you’ll need a different set of arguments, which I haven’t bothered to figure out. It’s still not awesome, but it’s slightly less tedious this way. At some point, I might write a parser that does what I want qacct to do, instead of what it does.
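
That said, GNU date’s free-form -d parser should accept this format directly, so a sketch like the following might do the trick on Linux (same formula, different date invocation):

function qacct2excel { echo "=`date -d \"$1\" +%s`/(60*60*24)+\"1/1/1970\""; }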

It’s entirely possible that there’s a better way to do this. The man page didn’t seem to have any helpful suggestions, though. I hate to say “SGE sucks” because I know very little about it. What I do know is that it’s hard to find good material for learning about SGE. At least HTCondor has thorough documentation and tutorials from HTCondor Week posted online. Perhaps one of these days I’ll learn more about SGE so I can determine whether it sucks or not.

Online learning: Codecademy

Last week, faced with a bit of a lull at work and a coming need to do some Python development, I decided to work through the Python lessons on Codecademy. Codecademy is a website that provides free instruction on a variety of programming languages by means of small interactive example exercises.

I had been intending to learn Python for several years. In the past few weeks, I’ve picked up bits and pieces by reading and bugfixing a project at work, but it was hardly enough to claim knowledge of the language.

Much like the “… for Dummies” books, the lessons were humorously written, simple, and practical. Unlike a book, the interactive nature provides immediate feedback and a platform for experimentation. The built-in Q&A forum allows learners to help each other. This was particularly helpful on a few of the exercises where the system itself was buggy.

The content suffered from the issue that plagues any introductory instruction: finding the right balance between too easy and too hard. Many of the exercises were obvious from previous experience. By and large, the content was well-paced and at a reasonable level. The big disappointment for me was the absence of explanation and best practices. I often found myself wondering if the way I solved the problem was the right way.

Still, I was able to apply my newly acquired knowledge right away. I now know enough to be able to understand discussions of best practices, and I’ll be able to hone my skills through practice. That makes it worth the time I invested. Later on, I’ll work my way through the Ruby module (to better work with our Chef cookbooks) and the PHP module (to do more with dynamic content on this site).

CNET considered harmful

In my younger days, I made great use of CNET’s download.com website. It was an excellent tool for finding legal software. Apparently, it has also become an excellent tool for finding malware. An article posted to insecure.org describes how CNET has begun wrapping packages with an installer that bundles unwanted, potentially malicious software with the desired package.

This is terrible, and not just for the obvious reasons. It’s bad for the free software community because it makes us look untrustworthy. There’s a perception among some people (especially in the business world) that software can only be free if it’s no good. I suppose that’s one reason some in the community use “libre” to emphasize the free-as-in-freedom aspect. (Of course, not all free-as-in-beer software is free-as-in-freedom. That’s another reason the distinction can be important.)

When this conveniently-bundled malware causes problems for users, it’s not CNET who gets the blame. Users will unfairly blame the package developer, even though the developer had nothing to do with it. For well-established and well-respected packages like nmap, this reputation damage may not be that important. For a new project just getting started — or for the idea of free software in general — this can be devastating.

My thoughts on the Mac App Store

This post proves that this is not a newsy blog.

A few weeks ago, I upgraded my MacBook Pro to Mac OS 10.6.6. With this upgrade came AppStore.app, the desktop equivalent of the App Store that’s been a large part of the success of iOS. My first impression was “this looks like Nokia’s Ovi Store” — it shows a lot of applications and very little information. Looking around, it seems pretty easy to use, but after years of installing software via `yum install $package`, I can’t see myself ever using it.

I got some flak on Twitter for saying this, but the flak was crap. First, I wouldn’t expect anyone to read the man pages for a GUI app on any platform. That’s what the built-in documentation is for (and if it doesn’t exist, that’s a serious bug in the program). Secondly, I wasn’t even talking about the interface. It’s more the idea of paying for the software. Not out of greed, but out of philosophical feelings about FLOSS.

That having been said, I think the App Store is pretty great overall. My big complaint about Mac OS X is the lack of a package management system. The ability to easily keep packages up to date is a serious strength of Linux distributions, and things like MacPorts and Fink don’t really cut it for casual users. I hope that Apple does the un-Apple thing and makes it more accessible to developers. In the meantime, it’s a great and overdue addition.

My TTYtter configuration

It’s been many months since I found out about TTYtter, a command line Twitter client written in Perl.  Though some users might bemoan the lack of a snazzy graphical interface, it is that very lack which appeals to me.  TTYtter places only a very tiny load on system resources, which means my Twitter addiction won’t get in the way of running VMs to test various configurations and procedures.  Being command-line based, I can run it in a screen session which means that I can resume my Twittering from wherever I happen to be and not have to re-configure my client.

I don’t claim to be a TTYtter expert, but I thought I’d share my own configuration for other newbs.  TTYtter looks in $HOME/.ttytterrc by default, and here’s my default configuration:

# Check to see if I'm running the current version
vcheck=1
# What hash tags do I care about?
track='#Purdue #OSMacTalk #MarioMarathon'
# Colors, etc are good!
ansi=1
# I'm dumb. Prompt me before a tweet posts
verify=1
# Use some readline magic
readline=1
# Check for mentions from people I don't follow
mentions=1

Of course, there are certain times that the default configuration isn’t what I want.  When I was reading tweets in rapid-fire succession during the Mario Marathon, I didn’t want non-Mario tweets to get in the way, so I used a separate configuration file:

# Don't log in and burn up my rate limit
anonymous=1
# Find tweets related to the marathon
track=#MarioMarathon "Mario Marathon"
# Don't show my normal timeline
notimeline=1
# Colors, etc are awesome!
ansi=1
# Only update when I say so. This keeps the tweet I'm in the middle of reading
#      from being scrolled right off my screen
synch=1
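
To use the alternate configuration, point TTYtter at it on launch.  Assuming the file above is saved as ~/.ttytterrc.mario, the -rc option should load it in place of the default:

ttytter -rc=mario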

There are a lot of other ways that TTYtter can be used, and I’m sure @doctorlinguist will tell me all of the ways I’m doing things wrong, but if you’re in the market for a new, multi-platform Twitter client, you should give this one a try.

Using Mac’s nvram(8) command

I recently came across the nvram(8) command included in OS X.  nvram is used to manipulate the settings of non-volatile RAM, which persists after reboots and power off.  From what I’ve seen, there are about 50 variables that are meaningful to the system, but I haven’t found a comprehensive list so far.  So what is this command used for?  That’s a good question.

One thing you can do is set arbitrary asset tags.  If your organization uses a central asset-tagging system, you can write the asset tag to NVRAM.  You can also set contact information like your name and e-mail address. Of course, none of these options are a guarantee you’ll recover a lost or stolen system. Assuming someone even thinks to look at nvram, the variables could be changed or deleted, or the whole NVRAM could just be wiped.
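
The invocations are simple.  The variable names below are just examples I made up for illustration; nvram will store whatever name=value pairs you give it:

sudo nvram asset-tag="IT-0042"
sudo nvram owner-email="jane@example.com"
nvram asset-tag           # print one variable
nvram -p                  # print all variables
sudo nvram -d asset-tag   # delete a variable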

I asked Twitter if anyone had uses for nvram(8) and no one seemed to.  I’ll leave it open to my readers to suggest uses for this command.

Filename extensions can cause problems

Most people don’t give much thought to the idea of filename extensions, even though extensions are nearly universal in modern computing.  Users have come to understand that .pdf means a file is in the Portable Document Format, or that .ppt is a Microsoft PowerPoint file.  DOS users recall that files ending in .exe, .com, or .bat are executable.  For unknown extensions, there’s the very helpful filext.com website.  There’s no doubt that filename extensions can provide very helpful information, but here’s the issue: not all platforms care about them.  That’s not a problem in all cases, but there are times when it makes life miserable.

Filename extensions can be just another part of the filename, or they can be an entirely separate namespace.  DOS introduced the idea of extensions to the general public.  In those days, a file had a name of up to eight characters and an extension of up to three.  This “8.3” convention persisted into Windows, and is still commonly seen in Windows system files, even though it is no longer necessary.  Unix-based systems, such as Mac OS X and Linux, have no feelings about extensions — they’re certainly not required, but some applications make use of them.  The dominance of Windows on the desktop has encouraged application writers to care about extensions, and extensions do help in finding the right type of file.

Here’s where it becomes problematic.  Because some systems don’t care about extensions, it’s easy to not have extensions on your filename.  Then, when you go to a system that does care, things don’t work as you expected.  Here’s a fine example: my wife needed to have a few pictures printed, so she loaded them onto an SD card and took them to the store. When she got there, the photo system would not find any of the pictures.  As it turns out, she had saved them without the .jpg extension, so while they were valid JPEG files, the system didn’t try to load them.

Now, most photo software, cameras, etc. will add the extension out of tradition (and because that’s what people expect). However, a manual renaming of the files after the fact could result in absent extensions.  So what is the solution?  Well, we’ll never get all platforms to come to agreement on what filename extensions are, and how they should be defined and treated.  The only answer, then, is that applications should be written to not focus on extensions, but on the contents of the file.  If applications used methods similar to the Unix file command to determine file type, then such problems could be avoided.
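
As a small illustration of that approach, here’s a sketch that would have saved my wife’s trip to the photo counter by checking contents instead of names (the SD card path is hypothetical; file’s -b and --mime-type flags print just the bare MIME type):

# Add .jpg to any file that is JPEG data but lacks the extension
for f in /Volumes/SDCARD/DCIM/*; do
  if [ "$(file -b --mime-type "$f")" = "image/jpeg" ] && [ "${f%.jpg}" = "$f" ]; then
    mv "$f" "$f.jpg"
  fi
done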