RE: XML Schema: "Best used with the ______ tool"

> 
> For example
> "It is possible to define collations that do not have the 
> ability to decompose a string into units suitable for 
> substring matching. An argument to a function defined in this 
> section may be a URI that identifies a collation that is able 
> to compare two strings, but that does not have the capability 
> to split the string into collation units. Such a collation 
> may cause the function to fail, or to give unexpected results 
> or it may be rejected as an unsuitable argument. The ability 
> to decompose strings into collation units is an 
> ·implementation-defined· property of the collation."
> http://www.w3.org/TR/xquery-operators/#substring.functions
> 
> That the drivers for this were to help translating 
> implementations (e.g. to
> SQL) is something I thought I read in informal XQuery material, 

The fact that collation URIs are not standardized in the XQuery spec is
certainly in part because it is assumed that many implementations will run
on platforms such as Java, Windows, or Oracle that provide extensive
collation support, and that users (and implementors) will want to take
advantage of the collations available in the environment in which they are
running. There was also a feeling that if collations were going to be
standardized, it would be better to do so in a separate standard and invoke
it by reference from XQuery. Although it's unfortunate that collations
can't be specified in a fully interoperable way, I think this was probably
the right design decision: every spec needs to decide what's in scope and
what isn't, and to provide clean ways of describing the points at which the
standard has interfaces and dependencies on the outside world.

The recognition that there are in effect two kinds of collation, those that
support substring matching and those that don't, is one aspect of this. I
don't think it was actually based on known limitations of the collations
available in any particular collation library or of the collation facility
in SQL; it was more a recognition that there are some real-life collating
sequences (for example, one that places "iso 646-1" before "iso 10646-1")
where using the collation to compare arbitrary substrings doesn't make much
sense. (We also considered recognizing a third kind, which can be used for
equality comparison but not ordering, but that seemed unnecessarily
complicated.)
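
To make the distinction concrete, here is a minimal XQuery sketch; the
collation URI is entirely hypothetical and stands in for whatever
implementation-defined collation the platform makes available:

    (: a hypothetical, implementation-provided collation URI :)
    let $c := "http://example.com/collation/english-phonebook"
    return (
      (: ordering and equality only require the collation to compare strings :)
      fn:compare("iso 646-1", "iso 10646-1", $c),

      (: substring matching additionally requires the collation to decompose
         a string into collation units; a collation without that ability may
         fail, give unexpected results, or be rejected as unsuitable :)
      fn:contains("iso 10646-1", "646", $c)
    )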

Personally I would have preferred it if we had defined a standardized way of
requesting a collation with certain properties (for example,
language=French, ignore-case=yes, ignore-accents=no) without standardizing
the precise behaviour of the collation that is used in response to this
request. I think the WG didn't do that because there were rumours that
another WG might be doing it - these things happen.
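
Purely for illustration, such a request might have looked something like the
prolog declaration below; the URI scheme and parameter names are invented,
not anything the WGs defined:

    (: hypothetical URI scheme: the parameters ask for properties of the
       collation without standardizing its exact behaviour :)
    declare default collation
        "http://example.com/collation?lang=fr;ignore-case=yes;ignore-accents=no";

    (: under such a collation the first pair would compare equal (case is
       ignored) while the second would not (accents remain significant) :)
    fn:compare("ecole", "Ecole"),
    fn:compare("ecole", "école")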

> 
> In the formal semantics it says
> "A language aspect described in this specification as 
> implementation-defined or implementation dependent may be 
> further constrained by the specifications of a host language 
> in which XPath or XQuery is embedded."
> http://www.w3.org/TR/2007/REC-xquery-semantics-20070123/#id-normativity

Yes, for example XPath says it's implementation-defined what the default
collation is, and XSLT says that it has to be Unicode Codepoint Collation.
That's a perfectly respectable form of parameterization.
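
In XQuery itself a query that cares can always remove the doubt by setting
the default explicitly in the prolog; the URI below is the Unicode Codepoint
Collation defined in the Functions and Operators spec, i.e. the collation
XSLT mandates as its default:

    declare default collation
        "http://www.w3.org/2005/xpath-functions/collation/codepoint";

    (: a straightforward codepoint-by-codepoint comparison :)
    fn:compare("Strasse", "Straße")
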
> 
> An example of such material is how functions that are based 
> on types being available should treat nodes with no schema:
>
> http://www.w3.org/TR/2007/REC-xquery-semantics-20070123/#jd_aux_derives_...

You've not quite got that right. This is about how to handle nodes that have
been validated against schema definitions that weren't available at compile
time. XQuery is designed to be usable in a wide range of different
processing scenarios. In a database scenario it's quite conceivable that all
known schemas will be preregistered in the database and that the mechanisms
for compiling queries and validating source documents can ensure
consistency. In a different environment, a different approach might be
needed. So the mechanisms for ensuring consistency are left
implementation-dependent, for good reasons I think. The rule here simply
says that the implementation might have access to schema information beyond
that defined in the language specification, and if it does, it is allowed
to make use of it to avoid reporting spurious type errors: it isn't
required to reject a source document that was validated using a schema that
wasn't imported into the query, provided it knows that the schemas are
consistent.
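
As a sketch of the database scenario (the schema, namespace, and document
names here are all invented): the query below relies on type annotations
that are only present if the stored document has been validated, possibly
against a schema the query itself never imported.

    (: hypothetical schema and document names :)
    import schema namespace po = "http://example.com/purchase-order"
        at "purchase-order.xsd";

    (: the value comparison relies on @price carrying a numeric type
       annotation from validation; whether the processor checks that the
       stored document was validated against a consistent schema is
       implementation-dependent :)
    for $item in doc("orders.xml")/po:order/po:item
    where $item/@price gt 100.00
    return $item/@sku
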
> 
> Implementation-dependent material is listed at
> 
> http://www.w3.org/TR/2007/REC-xquery-20070123/#id-impl-defined-items
> http://www.w3.org/TR/xpath-datamodel/#impl-summary
> http://www.w3.org/TR/xquery-operators/#impl-def
> 

Any specification is going to constrain some things and not others. It's
always possible to argue that the spec should impose more constraints (why
does XML not define a limit on length of names that all processors must
support?) or that it should impose fewer (why the insistence that the result
of unparsed-text() must only contain characters allowed in XML?). WGs are
always having such debates. But it would be quite wrong to suggest that
there should be no points of implementation freedom - standards thrive if
they achieve the right balance between interoperability and adaptability to
a wide range of different environments.

> > Implementing XSD is challenging, but it's certainly not 
> > prohibitively expensive.
> 
> We had two programmers leave when we had them working on XML 
> Schemas internals. 

Well, I found the challenge fun, but I guess not everyone would. I don't
want to be defensive about XSD - it's a fairly horrible spec of a fairly
horrible language. The same is true, on a smaller scale, of the spec for
URIs. I put up with both because they are useful and because I'm pragmatic.
The horribleness of the spec certainly adds to the difficulty of producing
interoperable implementations. But, by existence proof, it doesn't make it
impossible, either technically or commercially.

> > It's true that there are people who have chosen to implement subsets of
> > it - perhaps they feel their market only requires a subset. There were
> > people who only implemented subsets of XSLT, and the market soon made
> > its feelings felt.
> 
> Without being argumentative or defensive, doesn't this 
> statement contradict the earlier one? The market will be cool 
> about XQuery variances, but will make its feelings known 
> about XSD? 

I don't think I made a prediction about XSD. I said that in the case of
XSLT, the market had shown a preference for products with a high level of
conformance - the fact that James Clark's xt only implemented 95% of the
spec was one of the reasons people switched to Saxon in the early days, even
though at the time it was significantly slower. Whereas with XQuery,
certainly in a database context, users seem to put conformance lower on
their requirements list.

In practice I think there are a number of different scenarios for XSD. As a
validation technology, I think people are demanding a high level of
conformance to the spec, and the mainstream processors now deliver that -
not 100%, but about 98%. (In fact, users are demanding 110% - they want
interoperability between processors in areas where the spec chose to allow
variations). But XSD is used for other things as well, notably data binding.
To a large extent data binding is outside the scope of the XSD specification
itself (XSD doesn't tell you how its types map to Java or C++) and this
probably explains why there is wide variation between products in this area.

Michael Kay
http://www.saxonica.com/
