Vulnerability Research
Finding unknown bugs in software, hardware, and protocols — and characterizing them well enough to fix or weaponize.
Status: seed
Related: Exploit Development, Ghidra, Researchers, Reading List
What it is
Vulnerability research is the discipline of finding bugs that nobody else has reported, understanding them precisely enough to know whether they’re exploitable, and turning that understanding into something actionable — a report, an advisory, a CVE, or a working exploit. See Exploit Development for what happens after the bug is in hand.
Compared to pentesting, the time horizon is longer (weeks–years), the scope is narrower (one target, one protocol, one component), and the deliverable is a new vulnerability rather than a catalog of known ones.
Methods
Five primary ways to find bugs, used in combination:
| Method | Best when | Watch out for |
|---|---|---|
| Manual code review | Source available; protocol parsers, privileged daemons, kernel drivers | Tunnel vision; reviewer fatigue |
| Patch diffing | Closed-source vendors with public patches (Microsoft, Apple, browsers) | Patches can hide multiple bugs; silent fixes |
| Fuzzing | Format parsers, network protocols, anything that takes structured input | Coverage plateaus; corpus poisoning; harness bugs |
| Variant analysis | After one bug is found, hunt for the same pattern elsewhere | Confirmation bias; missing the unique cases |
| Specification review | New / complex protocols; cryptographic systems | Specs lie; implementations diverge |
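The fuzzing row can be made concrete. Below is a toy coverage-guided loop in Python; real work uses AFL++ or libFuzzer, and the mutation counts, iteration budget, and the planted-crash target in the usage note are illustrative assumptions:

```python
import random
import sys

def coverage_of(target, data):
    """Run target(data) under a line tracer; return (coverage set, crashed?)."""
    seen = set()
    def tracer(frame, event, arg):
        if event == "line":
            seen.add((frame.f_code.co_name, frame.f_lineno))
        return tracer
    crashed = False
    sys.settrace(tracer)
    try:
        target(data)
    except Exception:
        crashed = True
    finally:
        sys.settrace(None)
    return seen, crashed

def fuzz(target, seed=b"hi", iters=5000):
    """Toy coverage-guided fuzzer: keep mutants that reach new lines."""
    rng = random.Random(0)                  # fixed seed for reproducibility
    corpus, known, crashes = [seed], set(), []
    for _ in range(iters):
        data = bytearray(rng.choice(corpus))
        for _ in range(rng.randint(1, 4)):  # a few random byte replacements
            data[rng.randrange(len(data))] = rng.randrange(256)
        cov, crashed = coverage_of(target, bytes(data))
        if crashed:
            crashes.append(bytes(data))
        elif cov - known:                   # new coverage -> promote to corpus
            known |= cov
            corpus.append(bytes(data))
    return crashes
```

The feedback loop is the whole trick: inputs that reach new code are kept and mutated further, so the corpus climbs toward deeper paths instead of wandering randomly.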
The research loop
A reasonable workflow for a new target:
- Map the attack surface. What inputs cross trust boundaries? Which entry points are reachable from low-privilege contexts?
- Characterize the most promising surface. Read the code or RE the binary. Build a mental model of how it parses, validates, allocates.
- Pick a method. Source review for a small driver; coverage-guided fuzzer for a parser; patch diff for a closed-source service after a Patch Tuesday.
- Find a candidate bug. Crash, anomaly, dangerous code path.
- Triage. Reproduce reliably. Determine root cause, not just the symptom.
- Assess exploitability. Is this a write-what-where? An info leak? Just a DoS? Where do mitigations stand on this code path?
- Report or weaponize. Vendor disclosure, advisory, CVE, or full exploit (see Exploit Development).
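The "reproduce reliably" part of triage is worth automating early. A minimal sketch (function name and thresholds are assumptions; it relies on the POSIX convention that a negative returncode means the child was killed by a signal):

```python
import subprocess

def crash_rate(cmd, runs=5, timeout=10):
    """Re-run `cmd`; count runs killed by a signal.
    On POSIX, subprocess reports signal death as a negative
    returncode (e.g. -11 for SIGSEGV)."""
    crashes = 0
    for _ in range(runs):
        proc = subprocess.run(cmd, capture_output=True, timeout=timeout)
        if proc.returncode < 0:
            crashes += 1
    return crashes / runs
```

A rate below 1.0 hints at nondeterminism (races, uninitialized memory, heap-layout dependence), which matters for both root-cause analysis and exploitability assessment.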
Bug classes worth internalizing
- Memory corruption — buffer overflow (stack/heap), use-after-free, double-free, type confusion, uninitialized memory, integer overflow → buffer mis-size.
- Logic — auth bypass, race condition / TOCTOU, state-machine confusion, deserialization, prototype pollution.
- Input handling — injection (SQL, command, template, LDAP), SSRF, XXE, path traversal, deserialization gadgets.
- Cryptographic — weak primitive, side channel, IV reuse, padding oracle, broken trust model.
- Concurrency — race condition, double-fetch, lock-order inversion.
A useful exercise: pick one bug class, find ten public CVEs in it, write a one-page summary for each. You’ll start seeing the shape of the class.
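As one concrete instance of the race condition / TOCTOU class, here is the classic check-then-use gap sketched in Python (function names are illustrative, and the `O_NOFOLLOW` fix assumes a POSIX system):

```python
import os
import stat

def read_if_safe_racy(path):
    """TOCTOU-vulnerable: the file can be swapped (e.g. for a symlink
    to a sensitive file) between the access() check and the open()."""
    if os.access(path, os.R_OK):        # time of check
        with open(path) as f:           # time of use
            return f.read()
    return None

def read_if_safe(path):
    """Safer: open once, then validate the descriptor we actually hold."""
    try:
        # O_NOFOLLOW refuses symlinks at open time, closing the race window
        fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)
    except OSError:
        return None
    if not stat.S_ISREG(os.fstat(fd).st_mode):
        os.close(fd)
        return None
    with os.fdopen(fd) as f:            # fdopen takes ownership of fd
        return f.read()
```

The fix pattern generalizes: operate on the handle you validated, never on the path you checked.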
Taint analysis (the framing)
Most bug-finding reduces to tracing tainted data — untrusted input — from a source (where attacker data enters) to a sink (a sensitive operation that misuses it). The two flavors:
- Source-to-sink — start from untrusted input, follow the data, see what happens. Good with source code and dataflow tools (CodeQL, Semgrep).
- Sink-to-source — start from dangerous APIs (e.g. memcpy, system, LoadLibrary, deserialization entry points), trace backwards to see if attacker data can reach them. Good for unfamiliar codebases and binary-only targets.
Tools and platforms
- Static analysis — CodeQL (multi-repo variant analysis), Semgrep (fast pattern matching), Joern (code property graphs).
- Reverse engineering — Ghidra, IDA Pro, Binary Ninja, BinDiff / Diaphora / ghidriff for patch diffing.
- Fuzzing — AFL++, libFuzzer, honggfuzz, WTF (Windows full-system), kAFL (kernel), boofuzz (network protocols).
- Dynamic analysis — Frida, time-travel debugging (TTD), DynamoRIO, PIN.
- Symbolic execution — angr, Triton, KLEE.
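Binary patch diffing needs the tools above (BinDiff, Diaphora, ghidriff), but the core idea, comparing function bodies across two versions to localize the fix, can be sketched at source level. Python-only and matching functions by name are both simplifying assumptions:

```python
import ast

def changed_functions(old_src, new_src):
    """Return names of functions whose bodies differ between two versions.
    ast.dump without attributes ignores line-number shifts, so functions
    that merely moved don't count as changes."""
    def bodies(src):
        return {
            n.name: ast.dump(ast.Module(body=n.body, type_ignores=[]))
            for n in ast.walk(ast.parse(src))
            if isinstance(n, ast.FunctionDef)
        }
    old, new = bodies(old_src), bodies(new_src)
    return sorted(n for n in old.keys() & new.keys() if old[n] != new[n])
```

On real targets a "fix" often touches several functions; each changed function is a candidate root cause, and occasionally a second, silently fixed bug.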
See Ghidra for one entry point and grow tool pages from there.
Disclosure
Once you have a bug, you choose a disclosure path:
- Coordinated disclosure — report to the vendor, agree on a deadline (commonly 90 days, mirroring Project Zero policy), publish after fix or deadline expiry.
- Bug bounty — submit through HackerOne / Bugcrowd / Intigriti / vendor program; payouts vary wildly by target.
- Brokered sale — Zerodium, Crowdfense, Trenchant, etc. Paid more, but you give up disclosure choice and ethical control.
- Full disclosure — publish without coordination. Almost always wrong; occasionally justified for unresponsive vendors with users at active risk.
The choice is not just ethical — it shapes what work you can publish and what doors open or close to you in the field.
References
- Project Zero blog — https://googleprojectzero.blogspot.com/
- A Bug Hunter’s Diary — Tobias Klein
- The Art of Software Security Assessment — Dowd, McDonald, Schuh
- Microsoft MSRC — https://msrc.microsoft.com/
- Zero Day Initiative — https://www.zerodayinitiative.com/blog
