Editor’s Note: This is a guest post by Ryan Lackey (@octal)

One important security technique is ensuring that every action leaves a record, and that the record is reviewed by multiple parties.

While preventing bad actions is always desirable, sometimes that's difficult to achieve while meeting performance, reliability, cost, or other goals. A reliable record of malfeasance supports investigations after the fact, allows additional protections to be added against specific threats, and is an inexpensive backstop for a variety of systems. Independent review of those records helps ensure they are accurate and that improper activity is stopped.
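One way to make such records trustworthy is to chain them cryptographically, so that altering or deleting any earlier entry is evident to a later reviewer. A minimal sketch in Python follows; the entry format and helper names are illustrative, not any particular product's design:

```python
import hashlib
import json

def append_record(log, action):
    """Append an action to a hash-chained audit log.

    Each entry stores the SHA-256 hash of the previous entry, so
    altering or deleting any earlier record breaks every hash that
    follows it, leaving evidence for an independent reviewer.
    """
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"action": action, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify_log(log):
    """Recompute the whole chain; True only if no record was altered."""
    prev_hash = "0" * 64
    for entry in log:
        body = {"action": entry["action"], "prev": entry["prev"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

audit_log = []
append_record(audit_log, "pay invoice #1042")
append_record(audit_log, "approve expense report")
assert verify_log(audit_log)           # an untouched log verifies

audit_log[0]["action"] = "pay invoice #9999"
assert not verify_log(audit_log)       # tampering is detectable
```

The point is not the specific hash construction but the property: a reviewer who holds only the latest hash can detect any rewrite of history.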

This principle has been part of how humans deal with security for a very long time. Double-entry bookkeeping, invented in the late 13th century in Italy, is one of the first examples — a clerk wouldn’t be able to simply pay someone money without leaving a clear record for the owner to see. Audit and financial controls within a company are the modern business manifestation — the employee who incurs the expense isn’t the one to approve and pay the expense report. In retail, stores provide printed receipts to customers, both to prevent fraud by cashiers and for taxation authorities to prevent tax fraud by the stores. All of these systems are built around making it hard to hide the records of one’s actions from independent review.
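The double-entry idea can be sketched in a few lines: every transaction posts matching debit and credit entries, so a payment cannot be recorded without leaving a second, reviewable trace. Account names below are illustrative:

```python
# Every transaction posts paired debit and credit entries; the books
# balance only if total debits equal total credits, so a lone,
# unexplained payment stands out on review.
ledger = []

def post(debit_account, credit_account, amount):
    """Record one transaction as a matched debit/credit pair."""
    ledger.append({"account": debit_account, "debit": amount, "credit": 0.0})
    ledger.append({"account": credit_account, "debit": 0.0, "credit": amount})

post("office supplies", "cash", 120.00)
post("rent", "cash", 900.00)

# The balance check a reviewer (or owner) would run:
assert sum(e["debit"] for e in ledger) == sum(e["credit"] for e in ledger)
```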

Consumerization of IT

A major trend in computing over the past decade has been the “consumerization of IT”. In many ways this has been good — a shift away from expensive, difficult-to-deploy, all-or-nothing systems toward smaller systems which can be selected, adopted, and maintained much closer to the end user. A general increase in performance and decrease in cost across all systems, less required training and greater ease of use, and rapid adoption of new technologies (mobile, tablets) all contribute to an exciting time in the technology industry.

Unfortunately, one problem with the modern Software as a Service (SaaS) or public cloud model is that it weakens this protection. Users have no visibility into the back-end details of software running entirely on a provider’s remote systems. The software could be excellent when the contract is signed, yet the company can later weaken its systems in ways invisible to the user — to save money, to add a new feature, to comply with external pressure, or simply by mistake. More alarming still, a rogue employee at a provider could subvert systems without the knowledge of either the provider or the customer. So could an outside attacker, who, absent a robust record-keeping infrastructure, could make changes that are difficult to detect and that hurt both the customer and the provider’s reputation.

The Private Cloud

Boxed software, the predominant model in the past, largely addressed this problem by shipping discrete releases on a periodic schedule, to be run on the user’s own infrastructure. This software, once it was handed over, could be reviewed by the buyer, sometimes including source code audits or third party security assessments and certifications. Most importantly, the publisher of the software couldn’t readily substitute a new software update without leaving a record — the binary application which would run on the client’s own hardware. If a publisher included backdoors, major vulnerabilities, or other malware in their software packages, the user could see what had happened, and could either improve their pre-deployment security assessment process, or switch software vendors.
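A pre-deployment review of boxed software starts from exactly this property: the shipped binary is a fixed artifact whose digest can be recorded and later re-checked. A minimal sketch, assuming a vendor publishes (or an auditor records) a SHA-256 digest for each release:

```python
import hashlib

def file_sha256(path):
    """Hash a release artifact in chunks so large binaries don't need
    to fit in memory; the digest can be compared against the vendor's
    published value or a digest recorded at assessment time."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()
```

Comparing the digest computed at deployment time against the one recorded during the original assessment catches a silently substituted binary.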

Running on the client’s own infrastructure provided another protection. A smart deployment of packaged software included many interlocking layers of prevention, detection, and correction of improper behavior, both by software and by users. Firewalls could prevent sensitive systems from directly connecting to the outside world. External logging systems could create accurate records of the operation of systems, and independent systems could interoperate so the compromise of any one wouldn’t be sufficient to leak data. Backup tools could allow protection from catastrophic failures, as well as a historical record of when system configurations changed. While any one of these controls may have been subverted, the combination could be made robust.

To resolve this conflict between the cost savings and feature advantages of the public cloud, and the security advantages of the old in-house IT model, we can try another model: the private cloud. Software for the private cloud is used in much the same way as public cloud software, but is designed to be deployed to physical infrastructure within the enterprise.

Private Cloud File Syncing

Private cloud file syncing and collaboration has many advantages over public cloud file sharing (particularly in network performance and overall reliability), but from a security perspective its chief virtue is compatibility with the kind of robust security architecture a smart enterprise would want guarding its critical data assets. Because the deployment is hosted within the enterprise, existing firewall, IDS, logging, and other controls can protect it. Private cloud auditing services give Security Information and Event Management (SIEM) and log analysis software a simple way to observe activity at the file level and protect critical data.
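As a sketch of what file-level observation buys a SIEM, the rule below flags users performing an unusual number of deletions. The event shape (user, action, path) and the rule itself are illustrative assumptions, not any specific product's audit format:

```python
from collections import Counter

# Hypothetical file-level audit events, as a SIEM or log-analysis
# tool might receive them from a private cloud service's audit feed.
events = [
    {"user": "alice", "action": "read",   "path": "/finance/q3.xlsx"},
    {"user": "bob",   "action": "delete", "path": "/finance/q1.xlsx"},
    {"user": "bob",   "action": "delete", "path": "/finance/q2.xlsx"},
    {"user": "bob",   "action": "delete", "path": "/finance/q3.xlsx"},
]

def flag_mass_deletions(events, threshold=3):
    """Return users whose delete count meets the threshold -- a crude
    stand-in for the kind of correlation rule a SIEM would apply."""
    deletes = Counter(e["user"] for e in events if e["action"] == "delete")
    return [user for user, n in deletes.items() if n >= threshold]

print(flag_mass_deletions(events))  # -> ['bob']
```

Rules like this are only possible when the sync service exposes its activity at the file level rather than as an opaque remote service.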

Private cloud deployments still offer the benefits of the consumerization of IT, being cheap and easy to deploy, easy to use, and supporting a variety of platforms, while having good intrinsic security and great compatibility with existing security controls.

Ryan Lackey is a computer security expert, conference speaker, and founder of several startups. He is interested in new models of cloud security, hardware tamper-response and trusted computing, and security for mobile users in hostile environments. He is currently CEO of CryptoSeal, Inc. in San Francisco, CA.