My IT team spends a lot of time in log files. The daily rhythm: pull the application logs off a machine, open the file, and start grepping for whatever error might match the symptom a user reported, trying to find the one line that matters buried in ten thousand lines of noise.
The tools that exist for this are either too simple or too much. Plain grep works but gives you raw, unformatted walls of text with no context. On the other end, you've got tools like lnav or tailspin that either have a steep learning curve or only solve half the problem. The team didn't need a full TUI, and syntax highlighting alone wasn't enough. They needed grep, but smarter.
So I built loggrep.
The Problem
Here's what the actual workflow looked like before loggrep:
# Step 1: find the errors
grep -i "error" app.log
# Step 2: get overwhelmed by a wall of unformatted text
# Step 3: try to add context
grep -i "error" app.log -B 5 -A 5
# Step 4: now you have even MORE text with no color, no structure
# Step 5: try to narrow by time... manually
grep "2026-02-24 08:" app.log | grep -i "error"
# Step 6: give up and open the file in vim
Every troubleshooting session started this way. Chain a few grep commands together, pipe through less, squint at timestamps, manually try to correlate things. It worked, but it was slow and tedious.
The team had looked at alternatives. They tried a few log viewers. The feedback was always some version of the same thing: "too many features, too much setup, I just want to find the #@%& error." These are people who live in the terminal. They don't want a web UI or an Electron app. They want a command they can type from memory.
The Design Constraints
I set three rules for myself before designing and building:
It has to feel like grep. The team already knows grep. loggrep shouldn't require learning a new mental model. Same idea: you give it a file, you give it a pattern, it gives you lines. But it should do the tedious stuff automatically.
No config files. No setup. Install it, use it. If I have to write a getting-started guide longer than three lines, I've failed.
It has to be fast. This thing is going to process multi-gigabyte log files. If it's noticeably slower than grep, nobody will switch. I chose Rust for exactly this reason.
The Solution
loggrep is a single binary. You install it with cargo install, point it at a log file, and it immediately does the things you were doing manually:
Color-coded severity. ERR lines are red. WRN lines are yellow. INF lines are blue. It detects log levels from common formats automatically. No configuration. The visual hierarchy means your eye goes straight to what matters.
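Detection like this can be sketched in a few lines of Rust. This is a hypothetical simplification, not loggrep's actual code: the level names and substring-matching rules here are assumptions.

```rust
/// The levels this sketch recognizes (assumed; the real set may differ).
#[derive(Debug, PartialEq, Eq, Clone, Copy)]
enum Level {
    Error,
    Warn,
    Info,
}

/// Scan a raw line for common level spellings. The real tool presumably
/// does something smarter per format; substring matching is the minimal
/// version of the idea.
fn detect_level(line: &str) -> Option<Level> {
    let upper = line.to_uppercase();
    if upper.contains("ERR") {
        Some(Level::Error) // matches "ERROR", "[ERR]", "err"
    } else if upper.contains("WRN") || upper.contains("WARN") {
        Some(Level::Warn)
    } else if upper.contains("INF") {
        Some(Level::Info) // matches "INFO", "INF"
    } else {
        None
    }
}
```

From there, coloring is just wrapping the line in the ANSI escape sequence for that level before printing.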
Time range filtering. Instead of grepping for timestamp strings and hoping the format matches, you just say --from "08:05" --to "08:10". It parses timestamps from multiple formats: bracketed datetimes, ISO 8601, syslog, JSON fields. Five minutes of logs, instantly.
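The minute-level case is smaller than it sounds, because zero-padded 24-hour times compare correctly as plain strings. A sketch, assuming ASCII lines with an HH:MM token somewhere in them (loggrep itself handles full datetimes in several formats):

```rust
/// Find the first HH:MM token in a line. Assumes a zero-padded
/// 24-hour clock; digits and ':' are ASCII, so the slice below is
/// always on a char boundary.
fn extract_hhmm(line: &str) -> Option<&str> {
    let bytes = line.as_bytes();
    for i in 0..bytes.len().saturating_sub(4) {
        let w = &bytes[i..i + 5];
        if w[0].is_ascii_digit() && w[1].is_ascii_digit()
            && w[2] == b':'
            && w[3].is_ascii_digit() && w[4].is_ascii_digit()
        {
            return Some(&line[i..i + 5]);
        }
    }
    None
}

/// --from/--to style check: "08:05" <= "08:07" lexicographically,
/// so no date math is needed at this granularity.
fn in_range(line: &str, from: &str, to: &str) -> bool {
    match extract_hhmm(line) {
        Some(t) => from <= t && t <= to,
        None => false,
    }
}
```

Lines without a recognizable timestamp are dropped here; whether the real tool keeps or drops them is a design choice this sketch doesn't settle.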
Regex search with highlighting. The -p flag plays the role of grep's -e, but matches are highlighted in the output. Combined with severity filtering, you can do things like "show me everything mentioning timeout or failed" in one command.
Stats summary. One flag gives you a breakdown: total lines, severity distribution with bar charts, time span, top recurring errors. During an incident, this is the first thing you run. It tells you the shape of the problem before you start digging.
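The counting pass behind a summary like that can be sketched as a single scan plus an ASCII bar per level. The bucketing rules and output format here are guesses, not loggrep's actual ones:

```rust
use std::collections::HashMap;

/// One pass over the lines, bucketing each by a crude severity guess.
fn severity_histogram(lines: &[&str]) -> HashMap<&'static str, usize> {
    let mut counts = HashMap::new();
    for line in lines {
        let upper = line.to_uppercase();
        let level = if upper.contains("ERR") {
            "ERROR"
        } else if upper.contains("WRN") || upper.contains("WARN") {
            "WARN"
        } else if upper.contains("INF") {
            "INFO"
        } else {
            "OTHER"
        };
        *counts.entry(level).or_insert(0) += 1;
    }
    counts
}

/// Render one '#' per line; a real tool would scale bars to terminal width.
fn print_bars(counts: &HashMap<&'static str, usize>) {
    for (level, n) in counts {
        println!("{:>5} {:>6} {}", level, n, "#".repeat(*n));
    }
}
```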
Follow mode. Like tail -f but with all the filtering and coloring applied in real-time. Uses filesystem events (kqueue on macOS, inotify on Linux) instead of polling, so it's efficient.
Pipes from stdin. kubectl logs my-pod | loggrep -l error works exactly how you'd expect. So does journalctl -f | loggrep -l warn+.
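In Rust, files and pipes can share one code path by programming against the BufRead trait, which is the usual way a CLI gets this behavior for free. A sketch (the real filter is certainly more involved than a substring match):

```rust
use std::io::{self, BufRead};

/// Filter any buffered reader line by line, keeping lines that contain
/// the needle case-insensitively. Both io::stdin().lock() and a
/// BufReader<File> satisfy the same bound, so one function serves both.
fn filter_lines<R: BufRead>(reader: R, needle: &str) -> io::Result<Vec<String>> {
    let upper_needle = needle.to_uppercase();
    let mut hits = Vec::new();
    for line in reader.lines() {
        let line = line?;
        if line.to_uppercase().contains(&upper_needle) {
            hits.push(line);
        }
    }
    Ok(hits)
}
```

Because `&[u8]` also implements BufRead, the same function is trivially testable against in-memory input.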
Why Not Something Else
There are good tools in this space. I used most of them before building loggrep. The issue was never that they were bad - it was that none of them hit the exact intersection of what our support team needed.
lnav
The most full-featured terminal log viewer out there. Automatic format detection, SQL queries against log data, timeline view, multiple file merging. If you need deep analysis on well-structured logs, lnav is genuinely impressive.
But: steep learning curve. It's a full TUI with its own keybindings, modes, and query language. The team wanted to type one command and get an answer, not open an interactive session. It also struggles with the messy, non-standard log formats we deal with daily, logs that don't follow syslog or JSON conventions.
tailspin
Dead simple, exactly what the team wanted. Pipe any log through it and it highlights keywords, IPs, dates, UUIDs, HTTP methods. Zero config. Gorgeous output. If all you need is syntax highlighting for logs, tailspin is perfect.
But: it only highlights. No severity filtering, no time ranges, no stats, no regex search. You still have to grep separately and pipe the results through. The team needed the filtering and the visual output in one tool.
grep / ripgrep
Already installed everywhere. Blazing fast. Composable with every other Unix tool. ripgrep is the fastest search tool, period. The team already knew grep inside and out.
But: grep has no concept of log structure. It doesn't know what a timestamp is, what a severity level is, or how to summarize what it found. Every investigation required chaining 5+ commands together and manually parsing the output. grep is the answer to "find this string." loggrep is the answer to "what went wrong in the last hour?"
The gap isn't in any single feature. Every one of these tools does 2-3 things well. The gap is in combining all of them - color-coding, severity filtering, time ranges, regex, stats, JSON support, follow mode - in one command that doesn't require a TUI, a config file, or structured log formats. Most of my production logs are messy and non-standard. That's exactly where the existing tools fall short, and exactly where loggrep was designed to work.
loggrep isn't another log viewer. It's a log triage tool for ops people. The difference matters. A viewer is something you open and explore. A triage tool is something you run during an incident and immediately know where to look. That's the workflow I optimized for.
The Outcome
The team adopted it the same week I shared the binary. No training session, no documentation walkthrough. I Slacked them the install command, and within a day people were using it for active incidents.
The clearest signal that it worked: people started aliasing it. alias lg="loggrep". When someone makes a tool part of their muscle memory without being asked, you've built the right thing.
The best tools don't require buy-in; they get adopted because they're the right tool for the job. loggrep doesn't do anything revolutionary. It just does the things you were already doing, but removes the friction.
What I Learned
Constraints are features. Every feature I didn't add made loggrep better. No config files means nothing to misconfigure. No plugin system means nothing to debug. No TUI means it works in every terminal, over SSH, inside tmux, piped to other commands.
Build for the workflow, not the tool. The team didn't need a "log viewer." They needed their existing grep-based workflow to be less painful. loggrep succeeds because it slots into the exact same muscle memory with zero adjustment.
Small tools compound. loggrep doesn't try to be an observability platform. It's one tool that does one thing. But because it plays well with pipes and other Unix tools, it becomes part of a larger toolkit. That composability is more valuable than any single feature I could have built into it.