The real web 3.0
Nothing about the so-called web 3.0 is new. It was described effectively enough by Trygve Reenskaug in the 1980s as the model-view-controller abstraction. In modern terms, the first public web, or web 1.0, simply enabled viewing: it was a web of hypertext (a blurb distributed over HTTP and encoded in HTML). Then web 2.0 technologies emerged to provide unified control verbs: APIs such as SOAP, XML-RPC and RESTful interfaces came to drive web services, leaving only the model.
Thus web 3.0 provides typed links and at least a weak ontology (or folksonomy) to unify these into a knowledge model, typically by way of the RDF Data Model (which has RDF/XML, N3, Turtle, TriX etc. as instance data interchange/serialization formats, but who really cares?). See semantic wiki and mediawiki monoculture for more on unifying technologies, and semantic mediawiki for the most likely tool to be used in this wiki soon.
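To make the serialization point concrete, here is a minimal sketch using the Python rdflib library (the namespace and triple are invented for illustration): the same typed link written out in two of those interchange formats.
 # One triple, two serializations: the knowledge model is the same either way.
 from rdflib import Graph, Namespace
 EX = Namespace("http://example.org/")   # hypothetical vocabulary
 g = Graph()
 g.bind("ex", EX)
 g.add((EX.web3, EX.unifies, EX.model))  # a typed link: web3 --unifies--> model
 print(g.serialize(format="turtle"))     # RDF/Turtle
 print(g.serialize(format="xml"))        # RDF/XML: the same triple, noisier wire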
Technically and economically, we can describe the real web 3.0 simply as follows.
 The three Rs
"The best thing to do, for a small developer, is to lend support to those standards that are simple enough to be understandable. Use RELAX NG instead of the wretchedly overcomplicated XML Schema. Use XML-RPC instead of SOAP, and if you need a more complex interface than XML-RPC can handle, skip SOAP and design it following the REST principles." - Andrew Kuchling, 2002 
It also pays to avoid SKOS, SIOC or anything built on FOAF, as these are low-integrity models. DOAP must also be avoided for now. Feel free to exploit BACnet, though: it works. And WebDAV and wiki seem headed for a reconciliation.
"...when we started working on the design for a large image distribution and processing system, we already had a simple and scalable design, and the tools to support it. Just send XML documents representing objects back and forth over HTTP, and use the lightweight DOM structure to hold parsed versions of them inside the application. Add some glue code to let application code access the DOM structures as ordinary Python objects, and you have a complete and scalable system." - Fredrick Lundh, author of a SOAP library for Python, 
"RELAX NG -- which is an XML schema standard that competes with WXS. It is designed more for document-style XML than for XML born in programming data. It supports type annotations, but only as separate optional modules (which can include WXSDT). The bohemians insist that the next-generation XML technologies should not only learn from RELAX NG's isolation of class consciousness, but should avoid bias toward WXS, supporting RELAX NG and other alternatives as well. The battle rages on at present.
Certainly, if you want your data to outlast your code, and to be more portable to unforeseen, future uses, you would do well to lower your own level of class consciousness. Strong data typing in XML tends to pigeonhole data to specific tools, environments and situations. This often raises the total cost of managing that data." - Uche Ogbuji, in a 2006 article
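For a taste of the simplicity these authors point at, here is a sketch of RELAX NG validation using the third-party lxml library; the toy schema and document are invented:
 # RELAX NG describes the document grammar without pigeonholing the data
 # into programming-language types.
 from io import StringIO
 from lxml import etree
 schema = etree.RelaxNG(etree.parse(StringIO("""
 <element name="page" xmlns="http://relaxng.org/ns/structure/1.0">
   <attribute name="title"/>
   <text/>
 </element>
 """)))
 doc = etree.fromstring('<page title="Web 3.0">a blurb</page>')
 print(schema.validate(doc))  # True: the document matches the grammar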
"REST" is a way of organizing web services around URIs and other web technologies instead of using RPC systems like SOAP... One fundamental idea promoted by REST advocates is that HTTP is not simply a way to get bits from here to there but instead is an application protocol, with the methods GET, POST, PUT, and DELETE, just like a file system has a few fundamental actions. (Some applications may need a few more actions, which is the idea behind DAV ... a REST architecture that extends HTTP/1.1 to add support for metadata properties, locking, and namespaces.)" - Andrew Dalke
 The new DOS
- democratic domains run by users setting all standards, keeping control of their own common data
- open configuration letting any group of users move old applications to hosts supporting them
- sociosemantic webs compiled into semantic webs specific to a formal application protocol
 Democratic domain
The domain holders recognized by ICANN are under no obligation to respect any rational or reasonable expectation of end users who use the domain namespace as a guide to the meaning of words in URIs. Domains are treated as property. This is further reinforced by the opaque URI dogma, which requires all words used in URIs, including the domain name, to be utterly meaningless and to derive meaning only from the content. This does not match how URIs are used in practice, and it cedes the mediating role to search engines, which by definition cannot be imposed in any standard way. Accordingly, the standards on which some future semantic web must rely require voluntary agreements among domains, any one of which may defect for advantage. The history of real-world politics demonstrates that democracies do have the capacity to agree on common standards with each other, because they have already demonstrated that capacity internally. There is no reason to believe that anything short of democratic domains run by the actual users who participate - a participatory democracy - can gain any authority as a source of name discipline. The most obvious example of this working, at least in part, is Wikipedia: while it is not a good encyclopedia, it certainly has standardized the GFDL corpus namespace.
 Open configuration
The increasing "virtualization" of web hosts must logically end in host boot images as the complete representation of web services: to boot from a given image, which includes any authentication initiation required, is to become or to offer that web service. The popularity of peer-to-peer file sharing networks, which rely on simple networks of servers each providing exactly the same services, proves that quite sophisticated services can be offered over broadband networks even by customers of ISPs paying consumer access rates. It is reasonable to expect that moving a web, or even moving email services, should soon be a matter of filling out a web form, more or less as easy as moving DNS services is now.
 Sociosemantic web
Authors such as John Markoff glibly repeat the dogma that "web 3.0 is the semantic web". However, any attempt to discuss this seriously tends to just repeat the "weak versus strong typing" debate from object-oriented programming languages, and the "untyped versus typed links" debate from the intranet hypertext systems of the 1980s: "Today's web relies on links that are weakly typed, and that's for a reason: politics. Semantic webs don't scale because of varying points of view and incentives to lie." - Craig Hubley, who also predicts that "semantic webs won't evolve anywhere until we have semantic parties" or factions. He adds that "when characterizing the differences between hypertext systems I don't see a neat upgrade path from one technology to another, but rather several dimensions that mirror those in real politics: from authoritarian to peer participatory democracy, from top secret to utterly public and transparent, from fully web-integrated to standalone silos, from bodily harms at risk in every transaction to just talk about nothing of any bodily potential, from highly mobile to just-sit-on-your-butt, from well funded to volunteer only, from (most important) optimizing fundamental energy and material efficiency versus funding the next con job." He lists methods "that DO scale: tumbl.es, SRO tuples, wiki, meetup, comparison shopping, auctions, prediction markets," and notes that "you have so much more help to figure out what links mean on the web, than you do for objects within an OO program" or, for that matter, for articles on a restricted intranet.
As a prototype he says that "Wikipedia is web 1.5 at least. Web 2.0 may just expand it to billions of pages in a few hundred languages. On that you could expect to see dozens of good task-based information architectures evolve: web 3.0. But not one web 3.0. Several... imagine the unfairness of being fired from the whole web 3.0, say by having some 'global reputation'."
 The new POT
- politics including political virtues as ideals and open politics in force as rules
- ontology, typically weak ontology that describes only infrastructural capital actions
- trolling up a troll ontology to operationally describe challenges to dominant ontologies
"What happens in politics is parties: groups of people who agree that they despise each other's views slightly less violently than everyone else's and would not fight against imposition of a regime by someone from within their own group, and at least not fight to the death against imposition by a regime by another, as long as another election came soon and everyone followed some rules to limit the adversarial process to use ballots and words, not bullets. A main purpose of parties is to let believers argue out their semantics before encountering others too different from themselves (and maybe having to fight them). Or, put another way, the tents are kept far enough apart that those peeing out don't pee only on each other, and most of the pee is expended during election time so there's less left between elections when people have to approach each other's tents..." - Hubley, who predicts that semantic advocacy groups equivalent to parties, factions, will emerge to gain adherents. Wiki troll culture, for instance, may consist of such factions.
Like Clay Shirky, Hubley advocates relying on weak ontology. He notes that professors "agree on just enough basic standard terms to put in textbooks to cover the undergraduate material, and then agree not to fight too viciously in public about everything else. That's exactly what web ontologists do when they agree on stuff in RDF, OVAL, OWL, DAML+OIL, and other mostly useless toys that will be discarded in a few years at best. They're textbooks for the dullish undergrads, not so useful to solve real problems." He seeks "a peer-run system or participatory democracy" rather than assuming that "a military-style hierarchy will define web semantics or that a more controlled and centralized semantic test suite is going to yield some eventual standard" or strong ontology.
Hubley notes the necessity and usefulness of confronting power structures, especially those that propagate sysop vandalism. "The only real agreement I see among those doing real work on collective construction of serious knowledge bases is that 'all users are trolls': stuck in a permanent structural conflict with hosts of these webs, who simply cannot step out of the way to let the users define their own semantics. They are usually psychologically unable, but are also held emotionally to account and often legally liable. The hosts feel they must be in charge."
"I do not believe the W3 people have any clue what they're doing, even their REST axioms and verbs are wrong: what purpose does DELETE serve other than to REDIRECT you to a 404? REST was a happy accident, not a plan. Wiki was another happy accident, and it has more of a future than webDAV." - Hubley, who seems to believe in trolling even basic axioms and protocols. For instance, he describes living ontology most exactly not in his own projects but as part of the semantic eGovernment semantic wiki project. There is also a good description that compares it to other ontology projects.
He also trolls "other ontologies from those who do understand real living body semantics, such as Cliff Joslyn's group at LANL. But they biased their work rather heavily towards a mechanistic view of life that starts with biology. That's a more reliable path but will take a very long time to get up from a model of my mitochondrial DNA to planning my trip to France. The real web is more about human scales of action. I suggest this won't be solved by biologists, computer scientists or any hard science geeks... it's a social problem requiring negotiation and politics as usual, and that very good social software only comes out of addressing hard social problems. Not toy problems as SIOC addresses, but those that involve supporting lateral/peer human relationships in real working problem solving with the fate of living bodies at stake."
 The new VAT
Once politics is raised, economics cannot be far behind. Adoption of new technologies tends to be slow, especially for those with relatively high overhead, like semantic wiki.
The fastest motivation is immediate profit, which is why ad and porn data tends to spread faster than anything else. These are structured blurbs, but not granular representations of data (this is where XML and XPath/XQuery get it wrong, and where RDF/XML gets overlooked). A shared web of data is one of RDF sources, not XML ones - the source of much confusion. Consider the difference between a semi-structured (X)HTML page containing the Fortune 400 and a structured data source that provides granular access to data about the group or about each member. Thanks to Kingsley Idehen.
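Idehen's contrast is easy to sketch: with a scraped page you parse presentation, while with a data source you ask a granular question. A toy example using the Python rdflib library, where the companies, figures and vocabulary are all invented:
 # Granular access: ask the model for exactly the facts you want,
 # rather than scraping them out of an (X)HTML view.
 from rdflib import Graph, Literal, Namespace
 EX = Namespace("http://example.org/fortune/")  # hypothetical vocabulary
 g = Graph()
 g.add((EX.Acme, EX.revenue, Literal(1200)))
 g.add((EX.Globex, EX.revenue, Literal(900)))
 rows = g.query("""
     SELECT ?company ?rev
     WHERE { ?company <http://example.org/fortune/revenue> ?rev }
     ORDER BY DESC(?rev)
 """)
 for company, rev in rows:
     print(company, rev)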
 Sales example
When considering, for instance, which prospective customers to target, some model of value reporting must be consulted to estimate which relationships have the greatest potential to create value, so that those can be pursued preferentially. See capital asset model for more on this problem and its possible solutions.
Once the most valued action or best next step is known, more predictable on-the-ground actions can be pursued. This requires a command grammar, including especially all human command verbs that are issued from one individual to another who defers to them. All involved parties (proper nouns), and any bodies, subjects or objects otherwise involved, need names first - a good reason to adopt Wikipedia names, so that every notable possible party is already named and dossiered.
Once all potential actions can be described as control verbs, describing verb/noun types becomes much simpler: each verb operates on a fixed number of nouns, and a type implements a fixed set of verb/noun (or noun.verb, in object-oriented terms) pairs.
Roles (also defined by Trygve Reenskaug's methodology) and factions (added by Craig Hubley and other smug pro-trolling trolls) organize the users into "types" as well. These are simply the most active entities in an entity-relationship model.
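A toy sketch of this verb/noun/type scheme in Python; every name in it is invented for illustration:
 # Each verb operates on a fixed number of nouns; a role (a "type" of user)
 # implements a fixed set of verbs it may issue.
 from dataclasses import dataclass
 @dataclass(frozen=True)
 class Verb:
     name: str
     arity: int                # the fixed number of nouns it operates on
 @dataclass(frozen=True)
 class Role:
     name: str
     verbs: frozenset          # the verb/noun pairs this type implements
     def can(self, verb: Verb) -> bool:
         return verb in self.verbs
 SHIP = Verb("ship", 2)        # ship(order, address)
 APPROVE = Verb("approve", 1)  # approve(order)
 clerk = Role("clerk", frozenset({SHIP}))
 manager = Role("manager", frozenset({SHIP, APPROVE}))
 print(clerk.can(APPROVE), manager.can(APPROVE))  # False True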
The RDF Model is an open and standardized way of producing these. Even the folksonomies catalogued by del.icio.us, Flickr, Google Base, eBay, Amazon, and eventually open tags, are just more structured data sources. RDF instance data can provide a view on these, e.g. del.icio.us tags for 'semanticweb' via a Dynamic Data Web Start Page.
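What such "a view on these" might mean in practice: a sketch recasting a handful of bookmark tags as RDF instance data, with the bookmarks and the tagging vocabulary invented:
 # A folksonomy is just more structured data: each (page, tag) pair
 # becomes a triple any RDF consumer can query or merge.
 from rdflib import Graph, Literal, Namespace, URIRef
 TAG = Namespace("http://example.org/tag/")   # hypothetical vocabulary
 bookmarks = {
     "http://example.org/page1": ["semanticweb", "rdf"],
     "http://example.org/page2": ["wiki"],
 }
 g = Graph()
 for url, tags in bookmarks.items():
     for tag in tags:
         g.add((URIRef(url), TAG.taggedWith, Literal(tag)))
 print(g.serialize(format="turtle"))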
 Must there be only one?
"A lot of these semantic web approaches just assume that there's a single eventual future agreement on link types. There may be, but only once there were viable competing web 3.0 approaches would it make any sense to try to define a uniform general semantics to simplify all the core tasks of that emerging economy (based on those applications). Go read some linguistics: it's a very tough problem to tie actions to words." - Hubley
 Can there be only one?
Probably not. Differences between public/shared/private usage will almost certainly continue, and these will drive ontology differences that may be specific to an industry, profession or legal regime; e.g. Canadian libel law seems already to have had a major effect on online debate design.
 Reactive, reflective, reflexive
Competition, especially in an adversarial process, drives users to adopt more reflective processes to avoid reactive errors, e.g. sysop vandalism or wrongful dismissals resulting from overtrust in administrators. As reflexive process evolves from total quality management in each field subject to competition, especially 24x7 global distributed operations, tolerance for non-operational language, abuse of spatial metaphor, and so on, tends to decrease.
However, reflexive goals must be stated in terms of an organism or an ecosystem; they aren't easy to standardize in the abstract. Accordingly, accredited standards tend to apply only to processes at the reflective and reflexive levels of introspection: ISO 9000 and ISO 14000, for example, can't specify in detail what the processes are, nor how to inspect them; they can only specify the ISO 19011 auditing.
 Beyond 3.0: ultra-reflexive
Version 3.1 of the real web seems destined to be ultra-reflexive: using itself to define itself, and leveraging itself to change the real world. At that point it's out of control. But perhaps not out of controll.