Sarven Capadisli
http://csarven.ca/#i is me. Follow the URI rabbit.
Sarven's posts

Post has shared content
We are witnessing a complex social hacking attempt on a standards organisation to remove useful public key cryptography from browsers. You can see how this is happening in the number of misleading arguments provided: references that float in the air, pointers to not-yet-deployed specs, refusal to argue the case clearly and openly, and discussions splintered across groups, with partial arguments citing decisions made in other groups as justification in a circular way. Here is Tim Berners-Lee's post to the Technical Architecture Group at the W3C on the subject. https://lists.w3.org/Archives/Public/www-tag/2015Sep/0000.html

#security #TLS #browsers #Web +Electronic Frontier Foundation #html5

Post has shared content
2015 is bound to be a great year for LOV. In March the project will be four years old, and by then we should have delivered a brand new version and interface, of which the LOV-Search has been the prefiguration for a while now. Be patient, +Pierre-Yves Vandenbussche is working hard on it, and I will not deprive him of the pleasure of announcing it in due time.

Something I would really wish to see next year is a real community effort to improve the multilingual aspect of vocabularies in LOV, and that's why I open today this discussion category, Multilingualism - Translation. Although the community of vocabulary publishers and users is largely multilingual, an overwhelming majority of vocabularies are still published with labels in a single language, most of the time English. Some of them (more than 40) don't even indicate the language of their labels and comments at all, including famous ones such as FOAF, the Music Ontology, the Event Ontology and the Time Ontology. This is of course a very bad practice, blatantly ignoring the diversity of languages.
Out of 469 vocabularies (as of today) in LOV, 419 explicitly use English for labels/comments, and 83 use other languages, among which the leading ones are French (42 vocabs), Spanish (28), German (21), Italian (20), Japanese (12), Portuguese (8), Dutch (7), Russian (6), Czech (6), Greek (5), Polish (5) and Chinese (5). 32 more languages are used by fewer than 3 vocabularies each.
The LinkedGeoData ontology, thanks to its large contributor community, uses more than 40 languages, making it the undisputed champion of multilingualism in LOV... but unfortunately this vocabulary seems to have been offline for quite a while, and has never met other LOV publication requirements, such as being retrievable from its namespace. The other massively multilingual vocabulary is the DBpedia ontology, which comes in 25 languages, with hopefully more to come, again thanks to crowdsourcing and the various language editions of DBpedia (125 to date).
But lesser-known vocabularies, even those with single publishers, have made significant efforts to provide multilingual labels, such as the Military Ontology, which provides labels in 17 languages. Recent vocabularies in the W3C namespace, such as the Core Organization Ontology, have also made a noticeable translation effort, and we know that +Phil Archer is particularly keen on those issues, given in particular his involvement in European institutions, making him aware that "translation is the language of Europe".

All those efforts are really far from what we could dream of: a truly multilingual vocabulary ecosystem. Multilingualism is important for at least two reasons. The first and most obvious one is allowing users to search, query and navigate vocabularies in their native language. The second one I would stress is that translating is a process through which the quality of a vocabulary can only improve. Looking at a vocabulary through the eyes of other languages and identifying the difficulties of translation helps to better outline the initial concepts and, if necessary, refine or revise them. Hence multilingualism and translation should be native, built-in features of any vocabulary construction, not a marginal task. And if not, vocabulary users and re-users should be willing, and able, to contribute translations in their own languages.

How can we improve the current situation?
- As vocabulary creators and publishers, be proud of our natural languages, and provide labels and comments in those languages along with the "default" English ones as part of the default creation and publication effort. This is not a huge effort compared to the overall time needed to develop a vocabulary/ontology, and as said above, it is likely to improve the vocabulary's quality even in its original language.
- Develop services on top of the LOV database and API that allow collaborative translation of existing vocabularies; a minimal sketch of a possible starting point follows below.

+Google+ #UX The "recommendation engine" sucks:

Get the latest from your favorites on Google+
See what your favorite musicians, writers, athletes and entertainers are sharing publicly.

Looking at the list you've provided, I am confident in saying that a random recommender would have been more appropriate.

Post has shared content
#ConFoo on February 18-20. The call for papers is now open. Submit and rate proposals. http://ow.ly/AFXua

Post has attachment

+Google+ #UX Your #design consumes too much #energy. Consider reducing its footprint. If it has to come down to using Google+ or conserving my laptop's battery, guess what?

Ctrl-F4

Post has attachment
#LinkedData #LinkedResearch #OpenScience #DIY  

Call for Linked Research
http://csarven.ca/call-for-linked-research

Purpose: To encourage the "do it yourself" behaviour for sharing and reusing research knowledge.

Scientists and researchers who work in Web Science have to follow the rules that are set by the publisher: researchers need read and reuse access to other researchers' work, yet have to adopt archaic desktop-native publishing workflows. Publishers try to remain the middleman for society's knowledge acquisition.

Nowadays, more machine-friendly data and documentation is made available by the public sector than by the Linked Data research community. The general public asks for open and machine-friendly data, and the public sector is following up. Web research publishing, on the other hand, is stuck at the one ★ (star) level of the Linked Data deployment scheme. The community has difficulty eating its own dogfood for research publication, and fails to deliver its share of the "promise".

There is a social problem here, not a technical one. If you think that there is something fundamentally wrong with this picture, want to voice your concerns, and are willing to continue contributing to the Semantic Web vision, then please consider the following before you write about your research:

Linked Research: Do It Yourself

1. Publish your research and findings at a Web space that you control.

2. Publish your progress and work following the Linked Data design principles. Create a URI for everything that is of some value to you, and may be of value to others, e.g., hypotheses, workflow steps, variables, provenance, results, etc. (see the sketch after this list).

3. Reuse and link to other researchers' URIs of value, so that nothing goes to waste or is reinvented without good reason.

4. Provide screen and print stylesheets, so that your work is legible on screen devices and can be printed to paper or output to desktop-native document formats. Create a copy of a view for the research community to fulfil organisational requirements.

5. Announce your work publicly so that people and machines can discover it.

6. Have an open comment system policy for your document so that any person (or even machines) can give feedback.

7. Help and encourage others to do the same.

There is no central authority to make a judgement on the value of your contributions. You do not need anyone's permission to share your work; you can do it yourself, while others learn from it and give feedback.

Post has attachment
#LinkedData #SemStats #Statistics  

Making sense of Linked Statistical Data:

Research problem: Why do machines have difficulty in revealing meaningful correlations or establishing non-coincidental connections between variables in statistical datasets? Put another way: how can machines uncover interesting correlations?

http://csarven.ca/sense-of-lsd-analysis