Every once in a while, I pick up a new tool that makes my work days nicer. Here are three that I’ve started using regularly on a recent project. Maybe they’ll make your work days nicer, too.
## Find Your Files With Less Fuss
Born out of the Rust community, `fd` is a tool that plays the same role as `find`. Where `find` plays the part with a little bit of “👴🏼 get off my lawn,” `fd` is more “🍾 with your caviar?” It searches for files quickly, with defaults that make sense in a modern codebase.
File names are matched with Rust’s regular expression engine by default, which is usually more my speed than Bash-style glob matching. It also has a pretty slick `-x` option for running follow-up commands on all the files that it finds. You can do exactly the same thing with `find -exec` or `find | xargs`, but for some reason, `fd`’s syntax sticks in my head better.
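A quick sketch of both points. The file names and patterns here are hypothetical; the flags (`-x` and the `{}` placeholder) are fd's own:

```shell
# The pattern is a regular expression by default, not a glob.
# Find TypeScript files anywhere under the current directory:
fd '\.tsx?$'

# Run a follow-up command on each match with -x; {} stands in for
# the matched path. Here, count lines in every Markdown file:
fd '\.md$' -x wc -l {}
```

Compare that last line to the `find` equivalent, `find . -name '*.md' -exec wc -l {} \;`, and you can see why the `fd` version is easier to remember.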
Best part? It’s written in Rust. How cool is that?
## Search Files Quicker Than You Can Blink
`ripgrep` is a `grep`-like tool that has changed the way I navigate big codebases. It's insanely fast, so fast that I've had to reevaluate my habitual search strategy. Instead of keeping a detailed mental map of functions and filenames in my head, I just search. Nine times out of ten, I find exactly what I want faster than I could walk the file graph in my head.
What fuzzy-file-finding with command+p did for me in Sublime and Visual Studio Code, `ripgrep` has done for me at the command line. It gains its speed by making smart use of SIMD operations, Rust’s excellent regular expression engine, and some crazy good performance tuning.
It also smartly ignores everything in my `.gitignore` by default. If I ever need to search through something that I’ve otherwise ignored, I can tack `-u` onto the search to include ignored files.
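In practice that looks like this (the search terms are placeholders; the `-u` flag is ripgrep's):

```shell
# Search the whole project; anything in .gitignore is skipped by default.
rg 'TODO'

# Add -u to include ignored files too, e.g. to read library source directly:
rg -u 'module.exports' node_modules/
```

Doubling up to `-uu` also searches hidden files, if you ever need to go further.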
The search is fast enough that I often find myself doing something that I used to avoid like mail-order fruitcake: searching through `node_modules/`…on purpose 😱. Documentation is great, but when you have wicked-fast searching capabilities, sometimes it’s easier to just read the source.
## Cut Through the JSON Undergrowth with jq
There’s an entire jungle of APIs peddling JSON out there. Some of them do really fun stuff. Some of them were written by you, and some by me. All of them have one thing in common: they’re kind of a pain to deal with at a command line.
`jq` (which we’ve [written](https://spin.atomicobject.com/2013/01/27/json-command-line-jq/) [about](https://spin.atomicobject.com/2014/08/26/bash-reusable-tooling/) [before](https://spin.atomicobject.com/2014/05/04/json-toolbox/)) makes parsing JSON at the command line downright pleasant. It has an easy, but powerful, composable syntax that makes short work of even complex tasks.
A few days ago, I had to take a small handful of JSON files that represented an educational curriculum and generate a mapping from a UUID key on level 6 of the nested object structure to a list of Title(s) on levels 6 and 7. The output needed to be in CSV format.
After briefly considering writing a little Ruby or Python script to parse it out, I pulled out `jq` instead and had the job done in about 10 minutes. `jq` even wound up having a handy `@csv` filter that turned my parsed list into a perfectly escaped CSV that I could hand off to a client.
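The real curriculum files aren't shown here, but a simplified version of the same idea looks like this. The JSON shape (`units`, `uuid`, `title`, `lessons`) is invented for illustration; the `@csv` filter and the pipeline syntax are jq's own:

```shell
# For each unit, emit its uuid and title followed by its lessons' titles,
# formatted as a properly escaped CSV row. -r outputs raw strings
# instead of JSON-quoted ones.
jq -r '.units[] | [.uuid, .title] + [.lessons[].title] | @csv' curriculum.json
```

Given a unit with UUID `abc-123`, title `Algebra`, and lessons `Linear Equations` and `Graphing`, that produces `"abc-123","Algebra","Linear Equations","Graphing"`, with all the quoting handled for you.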
If you haven’t given these tools a try already, check them out and let me know what you think.