I have no plans to move my main iMac to macOS Catalina, at least for the foreseeable future. There are two key apps I use—Fujitsu’s ScanSnap scanner software and Many Tricks’ accounting app—that are both 32-bit. In addition, there are changes in Catalina around permissions that make it somewhat Vista-like and slow down my interaction with the system. (My MacBook Air is my “production” Catalina Mac, and I have an older retina MacBook Pro that I use for Catalina betas.)
But Apple really wants people to update to Catalina, so they let you know about Catalina…constantly, it seems. In System Preferences > Software Update, you’ll see this…
And while that’s annoying, it’s not nearly as annoying as the red “1” dot they stick on System Preferences, which will stare at you forever. I complained about this on Twitter, and as is often the case, some very bright people had solutions to the problem.
While it’s not the world’s loveliest box…ok, so it may be the world’s ugliest box…
…it’s been rock solid since day one. However, it’s aging and its CPU won’t be supported in an upcoming pfSense release, so I decided to replace it. (That way, I’ll have a spare if the new one breaks…at least until that unsupported version of pfSense is released.) Here’s the new box…
That’s a Protectli fanless Firewall Appliance with a quad-core Celeron J3160 CPU, 4GB of RAM, and 32GB of storage. And yes, it’s just a bit smaller and more elegant than my old box—the entire thing is roughly the size of my old box’s external cooling fan.
These images were automatically cropped from the master image (after I cropped that; more detail on what I did is coming in a follow-up post), via ImageMagick.
So this would be that post: How to auto-crop huge images using ImageMagick. If you’re not familiar with it, ImageMagick is a set of command-line tools to manipulate images. There are a number of ways to install ImageMagick, but I used Homebrew (brew install imagemagick).
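As a small taste of what ImageMagick can do here, this is a minimal auto-crop sketch (my example, not necessarily the exact commands used for the post): the -trim operator removes a uniform border, and -fuzz lets nearly-matching colors count as the border.

```shell
# Skipped cleanly if ImageMagick 7's `magick` binary isn't installed.
if command -v magick >/dev/null 2>&1; then
  # Build a demo image: a small red square on a larger white canvas.
  magick -size 200x200 xc:white -fill red -draw 'rectangle 80,80 120,120' demo.png
  # -trim crops away the uniform white border; +repage resets the
  # canvas geometry so the output is just the trimmed region.
  magick demo.png -fuzz 5% -trim +repage cropped.png
  magick identify -format '%wx%h\n' cropped.png
fi
```

On ImageMagick 6, substitute convert (and identify) for the magick command.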
I occasionally need to help one of our customers get the bundle identifier for a given app, for use with one of our apps. While the task isn’t complicated—the value is stored in a file named Info.plist within each app bundle—it’s not easy to explain to someone who doesn’t have a lot of Mac experience.
I figured there must be a less-complicated solution, and I was right, though it’s probably higher on the geek factor. After some searching, I found this thread at Super User, which offers a number of solutions. The simplest—and, in my experience, the most reliable—was the very first one: open Terminal and run this command:
osascript -e 'id of app "Name of App"'
The "Name of App" is replaced with the name of the app as it appears when hovering over its Dock icon. For Excel, for example, it’d be:
osascript -e 'id of app "Microsoft Excel"'
Run that command, and it returns com.microsoft.Excel, which is just what I need—I just have the customer copy the output and email it back to me.
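If osascript ever balks (say, the app name is ambiguous), there’s a fallback that reads the value straight out of Info.plist instead. My example assumes the app lives in /Applications:

```shell
# macOS-only, so guarded; note that `defaults read` takes the plist
# path without the .plist extension.
if [ -d "/Applications/Safari.app" ]; then
  defaults read /Applications/Safari.app/Contents/Info CFBundleIdentifier
fi
```

For Safari, that prints com.apple.Safari, the same value osascript would return.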
I use a ScanSnap ix500 scanner to scan a lot of paper into PDFs on my iMac. And thanks to the ScanSnap’s bundled optical character recognition (OCR), all of those scans are searchable via Spotlight. While the OCR may not be perfect, it’s generally more than good enough to find what I’m looking for.
However, I noticed that I had a number of PDFs that weren’t searchable—some electronic statements from credit cards and utility companies, and some older documents that predated my purchase of the ScanSnap, at least based on some tests with Spotlight.
But I wanted to know how many such PDFs I had, so I could run OCR on all of them, via the excellent PDFPen Pro app. (The Fujitsu’s software will only perform OCR on documents it scanned.) The question was how to find all such files, and then once found, how to most easily run them through PDFPen Pro’s OCR process.
In the end, I needed to install one set of Unix tools, and then write two small scripts—one shell script and one AppleScript. Of course, you’ll also need PDFPen (I don’t think Pro is required), or some other app that can perform OCR on PDF files.
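As a rough sketch of the “find all such files” half, here’s one way to flag PDFs with no text layer. This assumes poppler’s pdftotext (brew install poppler), which may or may not be the tool set my scripts actually use:

```shell
# List PDFs under the current folder that yield no extractable text;
# those are the OCR candidates. Skipped if pdftotext isn't installed.
if command -v pdftotext >/dev/null 2>&1; then
  find . -name '*.pdf' -exec sh -c '
    for f do
      # No text at all (after stripping whitespace) means no text layer.
      if [ -z "$(pdftotext "$f" - 2>/dev/null | tr -d "[:space:]")" ]; then
        printf "%s\n" "$f"
      fi
    done' sh {} +
fi
```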
When this tip was first posted, it didn’t work right: The log command ignored the --start, --end, and --last parameters. Regardless of what you listed for parameters, you’d always get the entire contents of the log file. I’m happy to note that this has been resolved in macOS 10.13.4, as log now functions as expected:
$ log show --last 20s --predicate 'processImagePath CONTAINS[c] "Twitter"'
Filtering the log data using "processImagePath CONTAINS[c] "Twitter""
Skipping info and debug messages, pass --info and/or --debug to include.
Timestamp Thread Type Activity PID TTL
2018-03-30 09:26:15.357714-0700 0xc88a8 Default 0x0 5075 0 Twitterrific: (CFNetwork) Task <9AD0920A-7AE7-4313-A727-6D34F4BBE38F>.<250> now using Connection 142
2018-03-30 09:26:15.357742-0700 0xc8d7a Default 0x0 5075 0 Twitterrific: (CFNetwork) Task <9AD0920A-7AE7-4313-A727-6D34F4BBE38F>.<250> sent request, body N
2018-03-30 09:26:15.420242-0700 0xc88a8 Default 0x0 5075 0 Twitterrific: (CFNetwork) Task <9AD0920A-7AE7-4313-A727-6D34F4BBE38F>.<250> received response, status 200 content K
2018-03-30 09:26:15.420406-0700 0xc8d7a Default 0x0 5075 0 Twitterrific: (CFNetwork) Task <9AD0920A-7AE7-4313-A727-6D34F4BBE38F>.<250> response ended
Log - Default: 4, Info: 0, Debug: 0, Error: 0, Fault: 0
Activity - Create: 0, Transition: 0, Actions: 0
This makes it really easy to get just the time slice you need from the overly-long log files. You can use s for seconds, m for minutes, h for hours, and d for days as arguments to these parameters.
This article provides a nice overview on interacting with log and predicates to filter the output—there’s a lot you can do to help figure out what might be causing a problem.
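A couple of predicate variations I find handy (my examples, not from the linked article):

```shell
# macOS-only; the unified `log` command doesn't exist elsewhere.
if command -v log >/dev/null 2>&1; then
  # Match on the message text rather than the process path:
  log show --last 30s --predicate 'eventMessage CONTAINS[c] "error"'
  # Or narrow the output to one subsystem, including info-level messages:
  log show --last 30s --predicate 'subsystem == "com.apple.TimeMachine"' --info
fi
```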
I use this script on the top-level folder where I save all my Fujitsu ScanSnap iX500 scans. Why? Partly because I’m a geek, and partly because it helps me identify folders I might not need to keep on their own—if there are only a few pages in a folder, I’ll generally try to consolidate its contents into another lightly-used folder.
The script I originally wrote worked fine, and still works fine—sort of. When I originally wrote about it, I said…
I feared this would be incredibly slow, but it only took about 40 seconds to traverse a folder structure with about a gigabyte of PDFs in about 1,500 files spread across 160 subfolders, and totalling 5,306 PDF pages.
That was then, this is now: With 12,173 pages of PDFs spread across 4,475 files in 295 folders, the script takes over two minutes to run—155 seconds, to be precise. That’s not anywhere near acceptable, so I set out to see if I could improve my script’s performance.
In the end, I succeeded—though it was more of a “we succeeded” thing, as my friend James (who uses a very similar scan-and-file setup) and I went back-and-forth with changes over a couple days. The new script takes just over 10 seconds to count pages in the same set of files. (It’s even more impressive if the files aren’t so spread out—my eBooks/Manuals folder has over 12,000 pages, too, but in just 139 files in 43 folders…the script runs in just over a second.)
Where’d the speed boost come from? One simple change that seems obvious in hindsight, but I was amazed actually worked…
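To sketch the idea (this is my reconstruction, not the script itself): instead of launching one mdls process per PDF, feed every filename to a single mdls invocation and sum the page counts from its output.

```shell
# macOS-only (mdls queries the Spotlight metadata store), so guarded.
if command -v mdls >/dev/null 2>&1; then
  find . -name '*.pdf' -print0 |
  xargs -0 mdls -name kMDItemNumberOfPages 2>/dev/null |
  awk '$1 == "kMDItemNumberOfPages" { total += $3 } END { print total+0, "pages" }'
fi
```

Spawning a process per file is almost always the bottleneck in scripts like this; batching the filenames amortizes that startup cost across the whole run.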
For a recent customer support question, I needed to know how long our app Witch had been running. There are probably many ways to find this out, but I couldn’t think of one. A quick web search found the solution, via ps and its etime output keyword.
With the pid, the command to find that process’ uptime is:
$ ps -o etime= -p "774"
The elapsed time readout is in the form of dd-hh:mm:ss, so Witch had been running for 11 days and a few hours and minutes. Note that you can combine these steps, getting the process ID and using it in the ps command all at once:
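One way to combine them (pgrep by exact process name is my choice here; there are others):

```shell
# Look up Witch's pid and feed it straight to ps; prints a note
# instead of erroring if Witch isn't running.
pid=$(pgrep -x Witch | head -n 1)
if [ -n "$pid" ]; then
  ps -o etime= -p "$pid"
else
  echo "Witch is not running"
fi
```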
After getting this working, though, I wondered if it’d be possible to keep my backups from the first day of each month, even while clearing out the other dates. After some digging in the rsync man page, and testing in Terminal, it appears it’s possible, with some help from regex.
My backup folders are named with a trailing date and time stamp, like this:
The new bits, -not -regex ".*-01_.*", basically say “find only files whose names do not contain the string ‘hyphen 01 underscore.’” And because only backups made on the first of the month contain that pattern, they’re the only ones that will be left out of the purge.
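Here’s a tiny self-contained demo of the pattern (the folder names are hypothetical, but mine follow the same hyphen-date-underscore-time shape):

```shell
# Two fake backup folders: one from the 1st of the month, one from the 15th.
mkdir -p demo/backup-2019-11-01_0230 demo/backup-2019-11-15_0230
# The -not -regex clause excludes anything containing "-01_", so only
# the mid-month folder is matched (and would be purged).
find demo -mindepth 1 -type d -not -regex '.*-01_.*'
# → demo/backup-2019-11-15_0230
```

Note that -regex matches against the entire path, not just the filename, which is why the pattern is wrapped in .* on both sides.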
This may be of interest to maybe two people out there; I’m documenting it so I remember how it works!
In yesterday’s tip, See sensor stats in Terminal, I implied that installation of the iStats ruby gem was a simple one-line command. As a commenter pointed out, that’s only true if you already have the prerequisites installed. The prerequisites in this case are the Xcode command line tools. Thankfully, you can install those without installing the full 5GB Xcode development environment.
Here’s how to install the command line tools: open Terminal, paste the following line, and press Return. You’ll see a single line in response to your command:
$ xcode-select --install
xcode-select: note: install requested for command line developer tools
At this point, macOS will pop up a dialog, which is somewhat surprising as you’re working in the decidedly non-GUI Terminal:
Do not click Get Xcode, unless you want to wait while 5GB of data downloads and installs on your Mac. Instead, click the Install button, which will display an onscreen license agreement. Click Agree, then let the install finish—it’ll only take a couple of minutes.
If you’re curious as to what just happened, the installer created a folder structure in the top-level Library folder (/Library/Developer/CommandLineTools), and installed a slew of programs in the usr folder within the CommandLineTools folder.
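And if you ever need to check whether (and where) the tools are installed, xcode-select can report that, too:

```shell
# Prints the active developer directory: /Library/Developer/CommandLineTools
# for a tools-only install, or the path into Xcode's app bundle if the
# full Xcode is selected. macOS-only, so guarded.
if command -v xcode-select >/dev/null 2>&1; then
  xcode-select -p
fi
```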