With the publication last year of Aryeh Goretsky's paper “Twenty years before the mouse,” a personal perspective on the history of viruses and malware, I took the opportunity to try something a little different for this blog by presenting the announcement here in an interview format.
Since people seemed to like it, we thought we'd revisit the format to coincide (approximately) with the publication of another of his papers, on the vexed issue of Possibly Unwanted Applications: Problematic, Unloved and Argumentative. If you find it disconcerting to think of two of your favourite bloggers playing reporter and
victim interviewee, just think of it as a podcast that you can listen to sans headphones.
DH: It seems to me that in spite of much increased public awareness in the last decade or so, people in general are not good at distinguishing between types of malware, and even the wider security community sometimes accuses us (AV specialists) of having pseudo-religious arguments about classification and taxonomy instead of implementing that 100% detection they think we should be able to provide. However, one issue that seems to come up pretty often on forums and in the press is this: given the risks from destructive malware, password stealers and so on, "possibly unwanted" seems both vague and lacking in drama. Do adware and the like really matter that much? (Rhyming responses are not necessary.)
AG: Actually, I'd say these types of programs are more an issue now than ever: It used to be that anti-malware companies could make very binary decisions about whether a program was malicious or not. Classic computer viruses could be identified by their recursively self-replicating nature, Trojans by how they claimed to perform one set of actions but covertly performed others, and the criminal gangs behind these programs could be similarly classified as well. Today, the number of threats seen is orders of magnitude greater than in previous decades, and the spectrum of malicious code—and the actors behind it—extends far beyond these easily defined black-and-white categories into much grayer areas. We have to look at things like intent; potentials, possibilities and likelihoods of misuse; the percentage of customers who may actually want to make use of such software; and other criteria before deciding how to categorize such threats. (Rhyming response not given.)
DH: That makes sense to me, but then I've read the paper and I work in the industry. But for people who don't have those advantages, it leads to another question. If PUAs do matter that much, why do some vendors flag PUAs by default and others make them optional?
AG: That can depend on a number of factors, such as the intended audience for the product (business, consumer or mixed), what threats the vendor emphasizes detecting in their product literature, requirements from their customers and so forth. In ESET's case, the decision about whether to detect potentially unwanted applications is placed in the hands of the customer because we believe it is ultimately their choice, not ours, as to whether such programs should be allowed on their computers. On the opposite side of the fence, an otherwise legitimate program may be bundled with a component that is a PUA, but prompt the customer as to whether or not it should be installed. So, providing the customer with a choice is a concept which exists for some PUA vendors, as well. Of course, there are PUAs which are hidden, integral to a program or otherwise do not allow the user to make a choice about installing them.
DH: You might think this is a slightly disingenuous question, given that I'm a director of the Anti-Malware Testing Standards Organization (AMTSO), but surely that creates a problem with product testing?
AG: Yes, it does. Because the criteria for detection as potentially unsafe or potentially unwanted applications vary from vendor to vendor, as do the default detection settings in their anti-malware programs, a test set containing these types of applications can return very different results depending upon how each anti-malware program is configured. There is also the question of how the tester interprets these results. If an object is flagged during testing as potentially being a threat instead of definitely being one, does that count towards detection? Or could it be classified as a missed detection, or even a false positive report?
DH: It's certainly an issue… In fact, there's an AMTSO guidelines document in preparation on selecting samples that will, hopefully, clarify things a little. (I'm supposed to be working on it this week, so glad of your input, as someone who's better acquainted with ESET product development than I am!) Do you think the PUA problem in general has grown in recent years?
AG: I know the problem has grown in recent years, largely from looking at the increased number of PUAs being listed in updates to ESET's threat signature database.
DH: Why do you think that is?
AG: Although the actions of potentially unsafe and potentially unwanted applications can be quite different from those of other types of malware, they typically have one trait in common: They are intended to make money for someone. Unlike some malicious activities which are unequivocally criminal, like botting a PC to make use of its resources, stealing sensitive information from it, or ransom/blackmail-type scenarios, a legitimate software vendor might include a PUA component as a way of generating revenue. Whether it is a primary means of income or simply a supplement, that revenue can be important for a software vendor as competition increases and income from licensing its product goes down, and it gives the vendor a way to underwrite continued development and maintenance of its software. This form of "sponsorship" might seem like a good deal to a software vendor, but it may also result in upset customers, depending upon the default installation options and behaviour of the PUA.
Author David Harley, We Live Security