
RE: XML Schema: "Best used with the ______ tool"

> 
> For example
> "It is possible to define collations that do not have the 
> ability to decompose a string into units suitable for 
> substring matching. An argument to a function defined in this 
> section may be a URI that identifies a collation that is able 
> to compare two strings, but that does not have the capability 
> to split the string into collation units. Such a collation 
> may cause the function to fail, or to give unexpected results 
> or it may be rejected as an unsuitable argument. The ability 
> to decompose strings into collation units is an 
> implementation-defined property of the collation."
> http://www.w3.org/TR/xquery-operators/#substring.functions
> 
> That the drivers for this were to help translating 
> implementations (e.g. to
> SQL) is something I thought I read in informal XQuery material, 

The fact that collation URIs are not standardized in the XQuery spec is
certainly in part because it is assumed that many implementations will run
on platforms such as Java, Windows, or Oracle that provide extensive
collation support, and that users (and implementors) will want to take
advantage of the collations available in the environment in which they are
running. There was also a feeling that if collations are going to be
standardized, it would be better to do this in a separate standard and
invoke it by reference from XQuery. Although it's unfortunate that
collations can't be specified in a fully interoperable way, I think this was
probably the right design decision - every spec needs to make a decision as
to what's in scope and what isn't, and to provide clean ways of describing
the points at which the standard has interfaces and dependencies on the
outside world.
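
To make this concrete, here is a rough sketch in XQuery. The only collation
URI the spec family itself defines is the Unicode Codepoint Collation; the
second URI below is made up, standing in for whatever the platform makes
available, and an implementation that doesn't recognize it is entitled to
raise an error:

    (: the one collation URI defined by the spec itself :)
    declare default collation
        "http://www.w3.org/2005/xpath-functions/collation/codepoint";

    (: compare#2 uses the default collation declared above;
       compare#3 names a specific collation - here a made-up,
       implementation-defined URI :)
    compare("strasse", "straße"),
    compare("strasse", "straße", "http://example.com/collations/de-phonebook")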

The recognition that there are in effect two kinds of collation, those that
support substring matching and those that don't, is one aspect of this. I
don't think it was actually based on known limitations of the collations
available in any particular collation library or of the collation facility
in SQL; it was more a recognition that there are some real-life collating
sequences (for example one that places "iso 646-1" before "iso 10646-1")
where using the collation to compare arbitrary substrings doesn't make much
sense. (We also considered recognizing a third kind, which can be used for
equality comparison but not ordering, but that seemed unnecessarily
complicated).
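
The substring-matching functions are where this distinction bites. A call
like the following (again with a made-up collation URI) is only guaranteed
to work if the collation can decompose strings into collation units; with a
collation that can only compare whole strings, the processor is free to
reject the call or give surprising results, exactly as the spec text quoted
above says:

    (: contains#3 and substring-before#3 need a collation that supports
       decomposition into collation units; the URI here is hypothetical :)
    contains("iso 10646-1", "646",
             "http://example.com/collations/numeric-aware"),
    substring-before("iso 10646-1", "-",
             "http://example.com/collations/numeric-aware")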

Personally I would have preferred it if we had defined a standardized way of
requesting a collation with certain properties (for example,
language=French, ignore-case=yes, ignore-accents=no) without standardizing
the precise behaviour of the collation that is used in response to this
request. I think the WG didn't do that because there were rumours that
another WG might be doing it - these things happen.
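
Purely as an illustration of what I mean - none of this syntax exists - a
standardized request might have looked like a URI with well-known query
parameters, with the exact behaviour of the collation supplied in response
still left to the implementation:

    (: hypothetical "collation request" URI - not part of any standard :)
    declare default collation
        "http://example.org/collation?lang=fr;ignore-case=yes;ignore-accents=no";
    compare("Élève", "eleve")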

> 
> In the formal semantics it says
> "A language aspect described in this specification as 
> implementation-defined or implementation dependent may be 
> further constrained by the specifications of a host language 
> in which XPath or XQuery is embedded."
> http://www.w3.org/TR/2007/REC-xquery-semantics-20070123/#id-normativity

Yes, for example XPath says it's implementation-defined what the default
collation is, and XSLT says that it has to be Unicode Codepoint Collation.
That's a perfectly respectable form of parameterization.
> 
> An example of such material is how functions that are based 
> on types being available should treat nodes with no schema:
> http://www.w3.org/TR/2007/REC-xquery-semantics-20070123/#jd_aux_derives_...

You've not quite got that right. This is about how to handle nodes that have
been validated against schema definitions that weren't available at compile
time. XQuery is designed to be usable in a wide range of different
processing scenarios. In a database scenario it's quite conceivable that all
known schemas will be preregistered in the database and that the mechanisms
for compiling queries and validating source documents can ensure
consistency. In a different environment, a different approach might be
needed. So the mechanisms for ensuring consistency are left implementation
dependent, for good reasons I think. The rule here is simply saying that the
implementation might have access to schema information beyond that defined
in the language specification, and if it does, then it is allowed to make
use of it to avoid reporting spurious type errors: it isn't required to
reject a source document that was validated using a schema that wasn't
imported into the query if it knows that the schemas are consistent.
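
In XQuery terms, the query below imports one schema at compile time and
validates a document at run time; whether the processor can also accept
nodes validated against schema components that weren't imported into the
query - and how it assures itself that the two sets of definitions are
consistent - is the implementation-dependent part. (The namespace and file
names here are invented for the example.)

    import schema namespace po = "http://example.com/purchase-orders"
           at "purchase-orders.xsd";   (: invented schema location :)

    (: validation gives the document typed nodes; accepting nodes that were
       validated elsewhere, against schemas not imported here, is up to the
       implementation's consistency rules :)
    let $order := validate strict { doc("po.xml") }
    return $order//po:lineItem
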
> 
> Implementation-dependent material is listed at
> 
> http://www.w3.org/TR/2007/REC-xquery-20070123/#id-impl-defined-items
> http://www.w3.org/TR/xpath-datamodel/#impl-summary
> http://www.w3.org/TR/xquery-operators/#impl-def
> 

Any specification is going to constrain some things and not others. It's
always possible to argue that the spec should impose more constraints (why
does XML not define a limit on length of names that all processors must
support?) or that it should impose fewer (why the insistence that the result
of unparsed-text() must only contain characters allowed in XML?). WGs are
always having such debates. But it would be quite wrong to suggest that
there should be no points of implementation freedom - standards thrive if
they achieve the right balance between interoperability and adaptability to
a wide range of different environments.

> > Implementing XSD is challenging, but it's certainly not
> > prohibitively expensive.
> 
> We had two programmers leave when we had them working on XML 
> Schemas internals. 

Well, I found the challenge fun, but I guess not everyone would. I don't
want to be defensive about XSD - it's a fairly horrible spec of a fairly
horrible language. Same is true on a smaller scale of the spec for URIs. I
put up with both because they are useful and because I'm pragmatic. The
horribility of the spec certainly adds to the difficulty of producing
interoperable implementations. But, by existence proof, it doesn't make it
impossible, either technically or commercially.

> > It's true that there are people who have chosen to implement subsets
> > of it - perhaps they feel their market only requires a subset. There
> > were people who only implemented subsets of XSLT, and the market soon
> > made its feelings felt.
> 
> Without being argumentative or defensive, doesn't this 
> statement contradict the earlier one? The market will be cool 
> about XQuery variances, but will make its feelings known 
> about XSD? 

I don't think I made a prediction about XSD. I said that in the case of
XSLT, the market had shown a preference for products with a high level of
conformance - the fact that James Clark's xt only implemented 95% of the
spec was one of the reasons people switched to Saxon in the early days, even
though at the time it was significantly slower. Whereas with XQuery,
certainly in a database context, users seem to put conformance lower on
their requirements list.

In practice I think there are a number of different scenarios for XSD. As a
validation technology, I think people are demanding a high level of
conformance to the spec, and the mainstream processors now deliver that -
not 100%, but about 98%. (In fact, users are demanding 110% - they want
interoperability between processors in areas where the spec chose to allow
variations). But XSD is used for other things as well, notably data binding.
To a large extent data binding is outside the scope of the XSD specification
itself (XSD doesn't tell you how its types map to Java or C++) and this
probably explains why there is wide variation between products in this area.

Michael Kay
http://www.saxonica.com/
