There is no such thing as "good results" from using a CMS. In the average office, "finding information" eats 2.5 hours of every worker's day. That is acceptable to pointy-haired types, who will call it a "success" if CMS cuts it to 1.5 hours. It is not acceptable to a serious student of information architecture or interaction design. See deep interaction design for more on the basic problems of any CMS.
Rigorous naming conventions, and a strict editorial policy that keeps everything in the right place under the shortest name that can actually be linked directly in sentences on other pages, cut that time to something like half an hour. Doing a few more things cuts it to something like ten minutes per day.
You can never, ever do that with any CMS. CMS is not for information architects using the best enterprise taxonomy they can figure out (see adopt target categories - all good category schemes follow the task or kaizen). CMS is for "script kiddies" who have never used a Unix shell and never written a real program. A government should not be allowed to use them, for the following reasons:
Startup problems
1. They are too hard to learn. I can show any idiot how to find a spelling or grammar error in Wikipedia, click "edit", fix the error, explain the change in the edit summary line, then click "save" and see the fix live. I can do that, over the phone, in two minutes.
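The workflow is so simple it can even be scripted. Here is a minimal sketch of the same fix through the MediaWiki Action API, assuming a recent mediawiki (1.24 or later) that allows anonymous edits; the endpoint URL, page title, and typo are all illustrative:

```python
"""Fix a typo on a wiki page the same way a human does: read, fix, save."""
import requests

API = "https://example-wiki.org/w/api.php"  # hypothetical wiki endpoint
session = requests.Session()

# MediaWiki requires a CSRF token even for anonymous edits.
token = session.get(API, params={
    "action": "query", "meta": "tokens", "type": "csrf", "format": "json",
}).json()["query"]["tokens"]["csrftoken"]

# Fetch the current wikitext of the page.
wikitext = session.get(API, params={
    "action": "parse", "page": "Some Article", "prop": "wikitext", "format": "json",
}).json()["parse"]["wikitext"]["*"]

# Fix the error, explain the change in the edit summary, and save.
session.post(API, data={
    "action": "edit", "title": "Some Article",
    "text": wikitext.replace("teh", "the"),  # the fix
    "summary": "fix spelling",               # the edit summary line
    "token": token, "format": "json",
})
```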
CMS imposes every kind of stupid barrier, from "logging in" to mandatory categorization to dozens of weird tags on the edit screen that mostly don't matter. Tikiwiki, which really is the best of them, still clutters its edit pages with hard-to-understand controls.
The difficulty of learning creates MANY other problems. Training summer students, interns, and co-ops to use a CMS is a waste of public time and money. Many of them, however, will arrive already knowing mediawiki, and if they don't, it's worth teaching it to them so that they (not you, the paid government worker) can spend their time correcting wrong information about Ontario in Wikipedia. Passing this benefit up should be a firing offense at any level of government. All governments benefit from citizens correcting widely read public sources without having to pay for it. I am sure the NDP wants to hire 12,000 people to do this and raise taxes to 100 per cent of income. I am equally sure that this would elect the Tories and result in eliminating all Environment functions. Like Walkerton.
2. CMS has poor social software features and prevents lateral relationships from developing between government agencies. It creates "information silos" that map exactly onto internal political cliques: agencies share data with those whose heads are friends and withhold it from those whose heads are disliked.
The lack of lateral or peer support causes MANY other problems. How much money do you think the US government's intelligence agencies, all together, spent on CMS from 1960 to 2005? Billions? On "support" of those "solutions"? Tens of billions? The end result, thanks to the lack of social features, data compatibility, and skill portability, was that even the information they were supposed to share didn't get shared. Results? 9/11. Iraq. The same problem goes on in domestic emergency response, resulting in Katrina, etc. When there's an environmental disaster in Ontario, will Gord want to go the same way as Mike Brown?
The US tried dozens of ways to break the logjams and get its intelligence agencies cooperating. It all failed, costing billions more. What was the solution?
"I dig Intellipedia. It's wiki, wiki, baby!" They've put Austin Powers in charge, and are using mediawiki. And it's working fine.
Given that, anyone who chooses anything else must be fired. There are no exceptions. There are no alternatives. Fire everyone who won't use mediawiki. Anything else is going down a provably failed track.
Intellipedia's success suggests that project leaders should accept whatever transparency and visibility compromises they have to, just to get started.
3. CMS is murder to set up, with insane configuration screens. Compare mediawiki: you can set up a free mediawiki on editthis.info in about two minutes. Try it. Now that you know it's true, so much for the getting-started problems. Here are the keeping-going and changing-course problems.
Long-term problems
4. CMS data is in a proprietary format. There are no standards for data exchange at a higher level than raw XML, and that isn't good enough. By contrast, it is very easy to move articles in and out of Wikipedia or other mediawikis, and this happens all the time. It would be a grand thing indeed for the Ontario government to correct all Wikipedia pages on the flora and fauna of Ontario. Florence Devouard actually got a project of this nature going in French Wikipedia (she is the famous user Anthere, and the Queen of All Wiki, and Mother of Trolls, and other honourable titles... oh, and was elected to the first Wikimedia Foundation board, and created Wikipedia ArbCom, and so on...).
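Every mediawiki speaks the same open XML export format, so moving an article out is one HTTP request. A minimal sketch (the page title is illustrative; the resulting file can be loaded into any other mediawiki through its Special:Import page):

```python
"""Pull a page in MediaWiki's open XML export format."""
import requests

# Special:Export returns the page as standard <mediawiki> export XML,
# readable by any other mediawiki (or anything else that parses XML).
xml = requests.get("https://en.wikipedia.org/wiki/Special:Export/Polar_bear").text

with open("Polar_bear.xml", "w", encoding="utf-8") as f:
    f.write(xml)  # upload this file on the target wiki's Special:Import page
```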
I cannot even list the horrors of relying on unique or proprietary data formats. They are so bad that every CMS vendor will tell you they "obey the standard", but they won't tell you it's a bogus one with no actual data behind it, or something they just made up and invited other CMS vendors to "meet" (i.e. "obey").
Paying public money to create data in proprietary form should be illegal. There are efforts to make it so - all efforts to promote free software in public use also promote open data formats that can be exchanged with whatever future software comes up. Anything less violates the rules against single-source purchasing in government work.
5. CMS metadata (information about the information) is even worse. You can't even find the two pages that have to be integrated, let alone integrate them. It took FIVE YEARS for ALL THE TROLLS IN THE WORLD to create a good working ontology at Wikipedia (its category names, its conventions for naming events, etc.), and you've seen how hard it was in the GPC and how easy backsliding is. Mindless droolers often "try" to do it "better". They always just mangle it. It's fragile.
Organizing metadata is extremely difficult. Get it right and you've got what they call the semantic web. Standards for this, called "ontologies", keep being proposed, but they are all fragile and too complex. A serious ontology would start with under 20 abstraction distinctions, all of which could be explained on one page to anyone capable of actually acting as an editor. See category:living ontology for exactly this.
6. The CMS ontology imposed on the data is barely good enough to keep web pages going. That is, for display.
However, it does nothing whatsoever for collaboration or compiling information (where things like Wikisource or Wikibooks shine). CMS ontology is all corporate-driven. Relying on it violates Jane Jacobs' Systems of Survival thesis: that private "trader" and public "guardian" activities cannot rely on ontologies or even ordinary distinctions that fit together, and so must avoid each other, even shun each other, lest corruption result. CMS is firmly in the trader realm.
CMS ontology is entirely unsuitable for public sector work, and astonishingly bad for environment work. It overloads very bad words, usually with spatial metaphors. Mediawiki does this too, but at least it's possible to fix most of it, e.g. mediawiki gronks are documented, and many of the fixes can be done at install time. Try to do this for the average CMS and you'll find you can't even find other users' problems.
7. Even worse than being wrong, the ontology is usually also fixed. CMS flexible-ontology support is very, very poor. Only drupal really attempts it, and it does such a bad job that the URLs are all non-memorable. That tells you all you need to know. Adopting a global ontology like the Wikipedia namespace is probably the only real standard at the moment. Anything smaller awaits the outcomes of a number of projects working on restricted English vocabulary. There are lots of charlatans out there talking about "enterprise taxonomy", but they are fumbling in the dark. They have no concept whatsoever of how the information is going to be used, and their rules are worthless. Each industry needs its own separate ontology research effort. For politics and environment issues there has only ever been one effort: living ontology. That is, the words we used in Imagine Halifax, lp.greenparty.ca and openpolitics.ca.
Can anyone who doesn't understand why living ontology matters be trusted to do any user-collaborative web site whatsoever, or to do anything on an intranet? It's just not the same problem as trying to pump out a very few people's work. Wiki is useful even for that, though.
8. CMS vendors lie. CMS integrators lie. CMS users are not efficient, not happy, and always embrace wiki if they get the chance.
There has never been a successful use of CMS. Serious web people keep all their content in revision control systems (CVS, RCS), which is the same concept as wiki. They check in changes and check out the live version, and the live version is what appears on the web. If there's a problem, they check out an older version (or a set of older versions; CVS and RCS are very easy to script) to "roll back". No one who doesn't know how to use a shell account with RCS or CVS to run a web site is worth a conversation. The so-called CMS is simply a bad user interface on a bad proprietary revision control system. Wiki works because it is a straightforward implementation of revision control for the web. So realistically, I have to say, it's the proprietary crap that is new, and wiki that is the original way everything on the web was done.
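As a minimal sketch of that check-in and roll-back workflow, here is GNU RCS driven from Python (assuming the standard ci and co commands are installed; the file name, log message, and revision number are all illustrative):

```python
"""Run a web page out of RCS: check in changes, roll back when needed."""
import subprocess

def check_in(path, message):
    # ci -l records a new revision and keeps a working copy in place,
    # so the live file is exactly what the web server keeps serving.
    subprocess.run(["ci", "-l", "-t-web page", "-m" + message, path], check=True)

def roll_back(path, revision):
    # co -f -rN.M overwrites the live file with an older revision.
    subprocess.run(["co", "-f", "-r" + revision, path], check=True)

if __name__ == "__main__":
    check_in("index.html", "fix broken link in footer")
    roll_back("index.html", "1.2")  # revision 1.2 is live on the web again
```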