The key trade is usually introduced via physical security in an ark/site/key crash course, which explains its relationship to the ark trade and site trade. As Ranum claims, "the right way to integrate physical and logical security is to create an organization-specific audit team responsible for looking at building entries and exits, telephone and fax use, and Internet use (including system and firewall logs). Additionally, all those aspects taken together need a connection to human resources - so all access can be rapidly terminated for employees who leave or are dismissed." Thus this trade generalizes the key audit to include not just USB keys and keycards but also passwords, credentials, etc.
As Richard Feynman explains in the context of the space shuttle, a concept of what is safe/fair/done in any technical field ultimately depends on the scientific method: testing, safety, process control, and engineering discipline must fit together. See technique/technology/doctrine and test/code/align for more on these methodologies.
One of the worst problems is collecting vast amounts of information only to act post facto. Ranum chides those who collect enormous amounts of information that is never used, and sees security as a social issue, not a technical one, since no amount of software or hardware is going to circumvent human nature. “People still do stupid stuff like click on links in spam and hold the door open for strangers going into a key-only building,” he says. Nor does he confine his comments to the small-scale social problems of organizations; he includes the larger scale: the idiocies of immigration, airline security, the media, the government, and fear mongering. He wrote, “the most dangerous attacks come from inside,” citing examples that included corporate espionage, Oklahoma City, Ruby Ridge and the attacks on Sept. 11. The bombers “were authorized individuals who were validated by the system,” because they used Virginia drivers' licenses. But a credential is not an identity.
Ranum also argues that execution control should replace the dominant methods of coping with malware. In an editorial on the July 2006 attacks against the US State Department, which resulted in massive and undetected penetration, he notes that "the attack succeeded [by] transmitting data out through the firewall using a novel method wrapped in Secure Sockets Layer (SSL); the State Department's Intrusion Detection Systems (IDS) couldn't detect it, either. A custom attack is like having a bullet shot through your head by someone with a sniper rifle. You're dead before you have a chance to update your security posture." Thus such efforts as OVAL, which attempt to enumerate badness, are doomed to fail for the most important organizations, which are precisely the ones likely to be subject to custom attacks, e.g. from Chinese military crackers.
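Execution control inverts the enumerate-badness model: instead of listing known malware, list the software allowed to run and deny everything else. A minimal sketch of that idea in Python follows; the allowlist, digest value, and function names are hypothetical illustrations, not part of any product Ranum describes:

```python
import hashlib

# Hypothetical allowlist: SHA-256 digests of binaries approved to run.
# In a real deployment this would come from a signed, centrally managed manifest.
APPROVED_DIGESTS = {
    # SHA-256 of the literal bytes b"test", standing in for a vetted binary
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def digest(path: str) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def may_execute(path: str) -> bool:
    """Default deny: permit a binary only if its digest is on the allowlist."""
    return digest(path) in APPROVED_DIGESTS
```

Tracking the thirty-odd legitimate applications on a machine this way is tractable; tracking 75,000 pieces of badness is not, which is the asymmetry Ranum's argument rests on.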
Ranum lists six 'dumb' approaches, the worst of which is default permit, which he describes as inherited from "the very early days of computer security," when "network managers would set up an internet connection and decide to secure it by turning off incoming telnet, incoming rlogin, and incoming FTP." Today, however, for each legitimate use "there are dozens or hundreds of pieces of malware, worm tests, exploits, or viral code. Examine a typical antivirus package and you'll see it knows about 75,000+ viruses that might infect your machine. Compare that to the legitimate 30 or so apps that I've installed on my machine, and you can see it's rather dumb to try to track 75,000 pieces of Badness when even a simpleton could track 30 pieces of Goodness."

Ranum's alternative is artificial ignorance: "a process whereby you throw away the log entries you know aren't interesting. If there's anything left after you've thrown away the stuff you know isn't interesting, then the leftovers must be interesting. This approach worked amazingly well, and detected a number of very interesting operational conditions and errors that it simply never would have occurred to me to look for."

Ranum also denigrates the penetrate and patch approach: "There are networks that I know of which have been "penetration tested" any number of times and are continually getting hacked to pieces. That's because their design (or their security practices) are so fundamentally flawed that no amount of turd polish is going to keep the hackers out. It just keeps managers and auditors off of the network administrator's backs. I know other networks that it is, literally, pointless to penetration test because they were designed from the ground up to be permeable only in certain directions and only to certain traffic destined to carefully configured servers running carefully secured software. Running a "penetration test" for Apache bugs is completely pointless against a server that is running a custom piece of C code that is running in a locked-down portion of an embedded system."
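The artificial-ignorance process Ranum describes can be sketched as a simple filter: discard every log line matching a known-uninteresting pattern, and treat whatever survives as worth a human's attention. The patterns below are hypothetical examples, not Ranum's own list; in practice the list grows as entries are investigated and dismissed:

```python
import re

# Hypothetical patterns for log lines already judged uninteresting.
BORING = [
    re.compile(r"session (opened|closed) for user \w+"),
    re.compile(r"CRON\[\d+\]:"),
]

def artificial_ignorance(lines):
    """Throw away entries matching known-boring patterns;
    whatever is left over is, by definition, interesting."""
    return [ln for ln in lines if not any(p.search(ln) for p in BORING)]

log = [
    "sshd: session opened for user alice",
    "CRON[4242]: job started",
    "kernel: disk error on /dev/sda",
]
print(artificial_ignorance(log))  # only the disk error survives
```

Note the inversion relative to signature-based detection: nothing here tries to recognize an attack, so a genuinely novel event still lands in the leftovers instead of being silently ignored.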
Ranum also critiques the ideas that hacking is cool, that users can be educated about security, and that action is better than inaction, treating all three as social biases.