An "evil developer attack" is a narrow example of an ''insider threat'': <ref>https://www.se.rit.edu/~samvse/publications/An_Insider_Threat_Activity_in_a_Software_Security_Course.pdf</ref>
<blockquote>Software development teams face a critical threat to the security of their systems: insiders.<br />
An insider threat is a current or former employee, business partner, or contractor who has access to an organization’s data, network, source code, or other sensitive information who may intentionally misuse this information and negatively affect the availability, integrity, or confidentiality of the organization’s information system.</blockquote>
In the case of software, a disguised attack is conducted against the integrity of the software platform. While this threat remains theoretical, it would be naive to assume that no major software project has ever had a malicious insider. {{project_name_short}} and all other open source software projects face this problem, particularly those focused on privacy and anonymity, such as VeraCrypt, <ref>TrueCrypt has been discontinued.</ref> Tails, I2P, The Tor Project and so on.
A blueprint for a successful insider attack is as follows:
# Either start a new software project or join an existing software project.
# Gain trust by working hard, behaving well, and publishing your sources.
# Build binaries directly from your sources and offer them for download.
# Attract a lot of users by making a great product.
# Continue to develop the product.
# Make a second branch of your sources and add malware.
# Continue to publish your clean sources, but offer your malicious binaries for download.
# If the attack goes undetected, a large number of users will be infected with malware.
An evil developer attack is very difficult for end users to notice. If the backdoor is rarely used, it may remain secret for a long time. If it were used for something obvious, such as conscripting all users into a botnet, it would be quickly discovered and reported.
Open source software has some advantages over proprietary code, but certainly not for this threat model. For instance, hardly anyone verifies that published binaries are actually built from the proclaimed source and publishes the results; making builds reproducible so that anyone can perform this check bit-for-bit is a procedure called "deterministic builds". <ref>https://mailman.stanford.edu/pipermail/liberationtech/2013-June/009257.html</ref> <ref>https://gitlab.torproject.org/legacy/trac/-/issues/3688</ref> This standard is quite difficult to achieve, but is being worked towards. <ref>Interested readers can investigate its complexity by searching with the phrase "trusting trust".</ref>
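To make the idea concrete, below is a minimal sketch in Python of the verification that deterministic builds enable: rebuild the binary locally from the published sources, then compare its hash with the official download. The file names are hypothetical, and a real verification would follow a project's documented reproducible build procedure rather than this simplified comparison.

<pre>
import hashlib

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical file names: the binary offered for download versus one
# rebuilt locally from the published sources.
official = sha256_of("project-1.0-official.bin")
rebuilt = sha256_of("project-1.0-rebuilt.bin")

if official == rebuilt:
    print("Match: the official binary corresponds to the published sources.")
else:
    print("MISMATCH: the official binary differs from the local rebuild.")
    print("official:", official)
    print("rebuilt: ", rebuilt)
</pre>

If builds are not deterministic, the hashes will differ even for honest binaries, due to timestamps, build paths and similar noise, which is exactly why the standard is difficult to achieve.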
While most security experts are focused on the possibility of a software backdoor, other insider attacks can have equally deleterious effects. For instance, the same methodology can be used to infiltrate a targeted project team but in a role unrelated to software development; for example, as a moderator, site administrator, wiki approver and so on. This approach is particularly effective in smaller projects that are starved of human resources.
Following infiltration, disruption is caused within the project to degrade productivity, demoralize other team members and, ideally from the attacker's perspective, cause primary contributors to cease their involvement. For example, using a blueprint similar to that of the evil developer attack, a feasible scenario is outlined below:
# Join an existing software project as a general member.
# Gain trust by working hard, behaving well, assisting readily in forums, making significant wiki contributions and so on.
# Attract a lot of community admiration by outwardly appearing to be a bona fide and devoted project member.
# Eventually attain moderator, administrator or other access once team membership is extended. <ref>The time period is likely to be shorter for smaller projects, perhaps less than 12 months.</ref>
# Continue to behave, moderate and publish well.
# Once trust is firmly established, subtly undermine the authority, character and contributions of other team members. <ref>For example, by casting unjustified aspersions.</ref>
# If the insider threat goes undetected for a significant period, the software product is diminished as contributions fall across numerous domains and ill will grows within the team.
The insider threat illustrates how difficult it is to trust developers or other project members, even if they are not anonymous. Further, even known developers who have earned significant trust as legitimate contributors can make serious mistakes that jeopardize users. The motives and internal security of everyone contributing to major software projects are a legitimate concern: the Tor developers, distribution developers and contributors, and the hundreds of upstream developers and contributors. <ref>In the case of {{project_name_short}}, no binaries are built from project source code. Only unmodified upstream binaries are redistributed, along with shell scripts. This claim is much easier to verify than if {{project_name_short}} were distributing binaries built from its own source code.</ref>
The trusted computing base of a modern operating system is enormous. So many people are involved in software and complex hardware development that it would be unsurprising if some of the bugs in existence were intentional. While detecting software changes in aggregate may be easy (by diffing the hash sums), finding and proving that a change is a purposeful backdoor, rather than a bug in well-designed source code, is near impossible.
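As an illustration of that asymmetry, the following Python sketch (directory names are hypothetical) diffs hash sums across two release trees. The aggregate detection step is mechanical; the output is merely a list of changed files, each of which still has to be reviewed by hand to judge whether a change is a benign fix or a deliberate backdoor.

<pre>
import hashlib
import os

def hash_tree(root):
    """Map each file path under root to its SHA-256 digest."""
    manifest = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    digest.update(chunk)
            manifest[os.path.relpath(path, root)] = digest.hexdigest()
    return manifest

# Hypothetical directories: two extracted source releases of one project.
old = hash_tree("release-1.0")
new = hash_tree("release-1.1")

# Detecting changed files is mechanical; judging intent is not.
for path in sorted(set(old) | set(new)):
    if old.get(path) != new.get(path):
        print("changed:", path)
</pre>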