Actually, I managed a privately hosted collaboration web server from 2001 through 2004 that provided every _collaboration_ feature available in today's 'cloud' environment, but was as secure and reliable as the IT group wanted it to be. Vastly more manageable than 360 is. It worked quite wonderfully for multi-office, multi-organization projects, and was an excellent repository for any and all electronic project documents. Extremely fine-grained control of access and reporting was available if the particular job required it, or the PM could set it up with looser restrictions and treat everyone on the team as a peer.
We did training jobs for cruise lines, water treatment projects for Los Angeles, and international coordination with equal facility. One of the nicest parts of the product we deployed was the inherent ability for non-IT staff to manage and organize their day-to-day work and never accidentally lose a document. On top of that, the _context_ and history of every file and version was maintained within the product. Deloitte was a big user globally. EMC owns that app now, and it's still available for either remote-hosted or private hosting scenarios.
So collaboration per se does not require a 'cloud' solution, however vaguely that's defined. It's entirely possible and practical to do it securely in-house for an organization of any major size, without having to deal with Terms of Service that claim unlimited rights to company property. Since few people other than sole practitioners, partners, or C-suite corporate officers have the legal authority to authorize that, any remote service that makes that a part of their ToS is legally unusable for the vast majority of potential users.
The second hyped 'feature' of cloud computing is access to remote CPU power for compute-intensive program execution. Computational Fluid Dynamics (CFD) is probably the poster child for that. CFD modeling takes a lot of processor time, is readily distributable, and can be executed out of order and reassembled later, just as movie frames can be rendered on a render farm using multiple PCs.
The problem is that doing it on a vendor's server is not necessary, not efficient, and inherently insecure. A better solution was developed years ago -- it's just not one that a vendor can resell over and over again to the same people. It's called grid computing, and it powers some of the most computationally intensive projects on the planet, from SETI@home to folding proteins for cancer research. The management and task distribution engine is done -- all a vendor needs to put together is the client portion that runs in the background. Instead of the secretarial PCs with 8 cores just running Outlook, they can run Outlook plus the grid client, and produce productive rendering, CFD, or hydraulic modeling at the same time.
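The mechanics the grid client relies on -- splitting a job into independent, numbered work units, letting clients complete them in any order, and reassembling the results by index -- can be sketched in a few lines. This is a minimal illustration of the pattern only, not any real grid product's API; all of the names here are made up.

```python
import random

def split_job(data, unit_size):
    """Split a large job into independent, numbered work units."""
    return [(i, data[i:i + unit_size]) for i in range(0, len(data), unit_size)]

def run_unit(unit):
    """Stand-in for the 'client' computation -- here, just sum the chunk."""
    index, chunk = unit
    return (index, sum(chunk))

def reassemble(results):
    """Results may arrive in any order; sort by unit index to rebuild."""
    return [value for _, value in sorted(results)]

data = list(range(100))
units = split_job(data, unit_size=10)
random.shuffle(units)                    # clients pick up units in arbitrary order
results = [run_unit(u) for u in units]   # units finish out of order
print(reassemble(results))               # answers come back in job order anyway
```

The point is that nothing in the job cares which machine, or in what order, each unit ran -- which is exactly why idle desktops scattered across an office can do it.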
Just as an example, I've got a series of obsolete desktops and laptops, ranging from a P4 laptop running Linux, to a netbook, to a 6-year-old Xi CAD box. Some of these have come and gone -- desktops died and been replaced -- but over the past 4 years they've done the equivalent of 19 years, 43 days of run time for World Community Grid, generating over 10 million points for the Help Conquer Cancer project, and picking up other tasks on their list when that job is slowed down.
Overall, that particular grid computing effort has 609K members running more than 2 million devices, and produces on average 228 YEARS of run time every calendar day.
How much work could be generated by all the idle cores and all the idle CPU cycles in your organization?
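As a back-of-envelope answer, with frankly hypothetical numbers (200 PCs, 8 cores each, 12 idle hours a day -- plug in your own):

```python
# Back-of-envelope estimate of what idle cycles add up to.
# All figures below are illustrative assumptions, not measurements.

pcs = 200                  # desktops in a hypothetical office
cores_per_pc = 8
idle_hours_per_day = 12    # nights, lunches, meetings

core_hours_per_day = pcs * cores_per_pc * idle_hours_per_day
core_years_per_day = core_hours_per_day / (24 * 365)

print(core_hours_per_day)            # 19200 core-hours every day
print(round(core_years_per_day, 2))  # ~2.19 core-years of compute per day
```

Even a modest office, on these assumptions, throws away a couple of core-years of compute every single day.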