Has exploitation been demonstrated against the fundamental constructs of the debugging process?



  • I'm curious to know if an attacker can fundamentally exploit the debugging process.

    I'm not asking whether specific debugging tools have been exploitable (surely some have), but whether the process of debugging itself, as implemented by any and perhaps every debugging tool, is vulnerable to being lied to or used as an attack surface.

    If the answer is "it depends on which OS we're talking about", let's focus on the Windows debugging API. And if the answer is that the way Windows does it isn't reliable or secure but it fundamentally could be, then that's what I'd like to know.

    This relates to research I'm doing on the problem space of modern malware using polymorphism, emulator evasion, and obfuscation to defeat signature recognition.

    It doesn't matter if the attacker knows it's running in a debugger

    I'm fully aware that an attacker can detect that they're running in a modified environment (see the whitepaper and accompanying video by Blackthorne et al., which is based on emulator detection, but debuggers create the same scenario) and use evasion strategies to hide the malicious behavior they would otherwise exhibit on a target machine. However, I'm interested in using a debugger (Qira, a "timeless" debugger, is of interest, although not decidedly so) to track the state of a process on the target machine, so I can experiment with applying machine learning to develop signatures of polymorphic, obfuscated, and evasive programs.

    Because we're watching a process on the actual target rather than within an AV emulator sandbox, it doesn't matter that the attacker knows they're being watched by a debugger. The attacker has to execute the malicious code path for the payload to accomplish its goal.
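To make the "attacker knows it's being watched" point concrete, here is a minimal sketch of one common evasion primitive: a timing-based check for single-stepping or heavy tracing. The loop size and threshold are my own illustrative assumptions, not values from the question or any real sample.

```python
import time

def looks_single_stepped(trials=5, threshold_s=0.05):
    """Crude timing-based instrumentation check (illustrative sketch).

    Single-stepping or per-instruction tracing inflates the wall-clock
    time of a tight loop by orders of magnitude. Natively, the loop
    below finishes in a few milliseconds; under an instruction-level
    tracer it can take far longer than the (assumed) 50 ms threshold.
    """
    worst = 0.0
    for _ in range(trials):
        start = time.perf_counter()
        total = 0
        for i in range(100_000):
            total += i  # trivial work the tracer must step through
        worst = max(worst, time.perf_counter() - start)
    return worst > threshold_s

print(looks_single_stepped())
```

The relevance to the question: this kind of check only tells the malware it is being watched. As the question argues, on the actual target the malware must still execute its payload path eventually, so detection of the watcher does not by itself defeat state-tracking.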

    My goal is detection of a signature; interrupting a detected process would be a bonus, but it isn't mandatory. I'm looking at this problem space from the perspective of building a reliable way to create signatures that recognize polymorphic, obfuscated payloads, so the threat can be identified and responded to rapidly, even if it already did the bad thing.

    Performance isn't of interest

    (So long as it's still feasible to run basic office programs performantly)

    I realize running a program in a debugger will cause a significant performance hit. That's an accepted trade-off of the research I'm interested in. We're assuming the defender is willing to pay higher hardware costs to run programs. If certain computationally expensive programs can't afford this overhead, the security measure wouldn't apply to certain hosts running those programs. Such hosts may be isolated in a specific network segment with that factor in mind.

    My question is this:

    Is there a fundamental flaw in the reliability of the way debugging can be accomplished? Is the way debugging happens vulnerable at the processor, kernel, or OS level? Is it possible for a properly designed debugger to reliably watch the state of a program in a production environment as it executes?



  • Debuggers are fundamentally ahead in this game. A perfectly written debugger will always be able to simulate a runtime environment that even perfectly written malware could never detect. Real-life debuggers are complicated software that can have many vulnerabilities, which may allow particularly sophisticated malware to detect that it's running under a debugger, but in theory debuggers have the inherent home-field advantage.

    However, one complicating factor is that in the real world, poking holes in buggy software and hardware and writing malware to exploit them is a much easier task than debugging an unknown program and deciding whether it contains malicious behaviour. So in practice, malware does have a lot of practical advantages.


