Seki

Shared publicly  - 
 
progress report 

tl;dr : three steps forward, two steps back

https://dannyayers.wordpress.com/2013/10/20/seki-update

Seki

[there's a better-formatted copy of this with more links here: http://dannyayers.com/2013/05/28/User-Management-via-RDF ]
I'm now calling my Seki project an application framework, and a primary use case is a Content Management System (I've just about got the core read/write functionality for this working). Such a thing clearly needs access control. This in turn needs user management, authorization and authentication. I've done quite a lot of reading around these aspects, and recently did some work on Stanbol's user management component, so am feeling well informed enough to start implementing. I'm taking a very iterative approach to dev, starting simple and gradually getting more sophisticated. Get it working first, get it right later. Seki is centred around RESTful HTTP and RDF, and here are some of my design choices in this context.

A definition reminder from Wikipedia : The process of authorization is distinct from that of authentication. Whereas authentication is the process of verifying that "you are who you say you are", authorization is the process of verifying that "you are permitted to do what you are trying to do". Authorization thus presupposes authentication.

Authentication

Ultimately I'd like Seki to support a variety of authentication methods, but following the 'get it working first' principle I'm beginning with HTTP Basic. As it stands this is pretty insecure, but later I'll probably handle it over HTTPS, which tightens things up (when I asked around, Hixie recommended Basic over HTTPS as a good approach). The next step after that will be WebID. So -

Accepted dev path :

HTTP Basic : simple, standard
HTTP Basic + HTTPS : simple, more secure
WebID : based on FOAF+SSL, this offers a secure and very user-friendly way to log in. FOAF is RDF, so that ties in nicely with Seki's tech.

Rejected (for now) :

Cookies : rather ugly in architectural terms
HTTP Digest : rather ugly in implementation terms
OAuth : complicated, has security holes and doesn't necessarily offer interop

[I've looked at a few other approaches, but the above are the most notable of the also-rans]
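As a rough illustration of step one, HTTP Basic boils down to decoding one header server-side. A minimal node.js sketch (function name and shape invented here, not Seki's actual code):

```javascript
// Minimal sketch of HTTP Basic credential parsing (illustrative only).
function parseBasicAuth(header) {
  // Expect: "Basic <base64(login:password)>"
  if (!header || header.indexOf('Basic ') !== 0) return null;
  var decoded = Buffer.from(header.slice(6), 'base64').toString('utf8');
  var colon = decoded.indexOf(':');
  if (colon < 0) return null;
  return {
    login: decoded.slice(0, colon),     // e.g. 'danja'
    password: decoded.slice(colon + 1)  // to be checked against the user store
  };
}
```

Over plain HTTP the credentials are only base64-encoded, hence the insecurity noted above; HTTPS wraps the same exchange in TLS without changing any of this code.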

Authorization

Essentially the requirements here are to look up the authenticated user in a database and see what resources they can access. Seki is built on RDF, the database is a triplestore (any store with a SPARQL 1.1 endpoint), and the model for authorization is based around an RDF vocabulary. The inherent Webbiness of RDF, with URIs at its heart, lends it to use in a RESTful system.

In general, best practice suggests reusing existing vocabs wherever possible to maximise interop. However, for experimentation and rapid development it's perfectly reasonable to invent a totally new vocabulary, replacing or aligning terms later. In the case of authorization, much of the data will be specific to the local system (and needs to be securely protected), so there isn't all that much potential for reuse. Hence I've put together a custom vocab; to what extent it's worth connecting to other vocabs remains to be seen.

There is some prior art around the vocab, most of it descended from the W3C ACL work. SIOC has quite a few relevant terms, and Stanbol uses the vocab Reto Bachmann-Gmür designed for Clerezza. I've borrowed from each of these, and have been informed a fair bit by the work Thomas Bergwinkl has done on the issue.

The simplest approach would be to start with users modelled as individual persons (as foaf:Person) or more generally as agents (foaf:Agent). But FOAF notes that a person/agent is distinct from an account on a site and hence adds indirection, Agent -> account -> OnlineAccount (this is also reflected in SIOC). It occurred to me that when working from the system's point of view, it's probably easier to think about users than user accounts, so I've got the term User. Say my login name is 'danja', then the corresponding resource on the system will be e.g. http://hyperdata.org/users/danja. But I have got a term for the indirection: User -> owner -> Agent.

A term I like from SIOC is Space, and I've loosely copied this to mean a set of resources. What I'm thinking is that a Space can be defined by a SPARQL SELECT query or a URI template.
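As a sketch of the URI-template flavour: a Space could hold a template, and a resource belongs to the Space if its URI matches it. The toy template syntax below ({name} matching one path segment) and the function names are my assumptions for illustration, not the draft vocab:

```javascript
// Turn a toy URI template into a regex; {name} matches one path segment.
// Full RFC 6570 templates are much richer than this.
function templateToRegex(template) {
  var escaped = template.replace(/[.*+?^$()|[\]\\]/g, '\\$&'); // escape regex chars
  var pattern = escaped.replace(/\{\w+\}/g, '[^/]+');          // template slots
  return new RegExp('^' + pattern + '$');
}

// Does this Space (defined by a URI template) contain the given resource?
function spaceContains(space, uri) {
  return templateToRegex(space.uriTemplate).test(uri);
}
```

The SPARQL SELECT flavour of a Space would work analogously: run the stored query and check whether the resource's URI is in the result set.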

Here's my draft vocab : usermanagement.ttl

[image: user management vocab structure]

Access Control Implementation

Ok, now for the Seki subsystem that will tie all this together. All the data is accessed via SPARQL 1.1. Server-side, a typical process inserts values such as 'login' (the user account name), 'accessMode' (Read, Write etc.), 'spaceType' and 'spaceDefinition' (to list the relevant resources) into a predefined template, producing a SPARQL ASK query whose result says whether or not the given user may apply the requested operation to the given resource.
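A hypothetical sketch of that templated-ASK process (the um: namespace and property names here are placeholders, not the actual draft vocab's terms):

```javascript
// Placeholder ASK template; um: namespace and properties are invented.
var ASK_TEMPLATE = [
  'PREFIX um: <http://example.org/usermanagement#>',
  'ASK WHERE {',
  '  ?user um:login "{{login}}" ;',
  '        um:role ?role .',
  '  ?role um:permission ?perm .',
  '  ?perm um:accessMode um:{{accessMode}} ;',
  '        um:accessTo <{{resource}}> .',
  '}'
].join('\n');

// Fill the {{key}} slots with concrete values.
function buildAskQuery(values) {
  return ASK_TEMPLATE.replace(/\{\{(\w+)\}\}/g, function (match, key) {
    return values[key];
  });
}
```

A real version would escape the substituted values before insertion, to avoid query injection.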

User Management

The plan here is to provide a series of forms for adding users (for individual users and administrators), associating permissions to users and roles, associating roles with users, defining spaces and so on. Server-side again this will be done through templated queries.

Client-side, in the first instance I'm planning on using jQuery/jQueryUI components and Backbone.js to RESTfully access resources corresponding to users, roles etc. Initially I'll just use regular form fields for the messages for POSTing these. I've not looked closely at this yet, but I hope to lean towards using JSON/RDF messages wherever appropriate.

Regarding the API, as with authentication, I'd ultimately like Seki to support a variety of approaches: RESTful HTML form POSTs etc., GET/POST/PUTting of Turtle and JSON graphs, endpoint-style access and so on.

3 comments

I did this before WebID appeared, so there's no explicit link there at present, but it's very much the same sort of idea.

The idea was that RDF+reasoning meant that the AuthZ could be 'opportunistic', in the sense that it could repurpose information which wasn't originally intended to be used for access-control (there are doubtless some security Issues here, but this seemed useful for some purposes).  For example, it could use the fact that your X.509 certificate -- in principle intended only for AuthN -- was signed by a .ac.uk root to decide that you were in UK academia, and thus in Europe, and thus have access to a Europe-only dataset.

Or (this was clearly feasible, but never implemented) it could conclude that if you were trusted enough to borrow books from your university library (based on an LDAP lookup), then, whoever you were, that was good enough for a PEP to decide you were trustable enough to have a go on its compute cluster.

This project's been gnawing at me, so if there's any way it (or indeed I) can have a run around the block, just ping me.

Seki

Merged the dev branch back into main. I don't think it's changed functionally and there didn't seem to be any point in keeping the distinction.

I've just been playing with the code a bit. Some refactoring and a couple of minor bugfixes. I started looking at how I'd go about supporting "application/json" after seeing the stuff around here:
https://plus.google.com/u/0/112609322932428633493/posts/BrsM6ygiDfc

I've not worked out exactly what the JSON structures should look like yet - it'll be JSON-LD in some form or other. But considering it has suggested how I can modularise things by using a separate handler per media type. There's a JSONHandler skeleton in there; when I have the chance I'll move the HTML handling out to the same kind of thing. I think I need to replace the current RDF handling with a simple proxy to the remote (Fuseki) store for now, and come back to it once I'm a bit clearer about auth.
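The handler-per-media-type idea might look something like this (handler contents and names invented, just showing the dispatch shape):

```javascript
// One handler per media type; real handlers would render/parse properly.
var handlers = {
  'text/html': function (req) { return 'html handler: ' + req.url; },
  'application/json': function (req) { return 'json handler: ' + req.url; }
};

// Pick a handler based on the Accept header, falling back to HTML.
function dispatch(req) {
  var accept = (req.headers && req.headers.accept) || 'text/html';
  var type = Object.keys(handlers).filter(function (t) {
    return accept.indexOf(t) !== -1;
  })[0] || 'text/html';
  return handlers[type](req);
}
```

Adding a new format then means adding one entry to the map, which is the modularity win.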

Need to update the todo list asap.

Seki

Towards Declarative Testing

Although this project is officially on temporary pause, I'm still actually working on it.

Thing is, very soon with Seki I need to put in place comprehensive functional tests. Those tests will all be of a very similar shape, essentially acting as HTTP clients firing off messages to Seki and seeing what happens. Now I could (as folks generally do) code these up directly into a fixture.

PS. I knew about other RDF-based tests (e.g. for the RDF specs) but only just discovered one had been set up for SPARQL 1.1. Not looked into it yet...
http://www.w3.org/2009/sparql/docs/tests/

My own experience of creating test fixtures is that they tend to become something comparable to spaghetti code (pasta spirals?). Naming is all over the place and it's hard to tell what's actually been covered and what hasn't. Ok, I could layer on some kind of coverage testing. But as the tests are all of a similar shape, it occurred to me that I could describe them declaratively and have a simple little engine read the descriptions and fire them off. So what I'm looking at is something like this:
https://github.com/danja/seki/blob/dev/src/tests/notes.txt
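The declarative shape I have in mind is roughly this: tests as plain data, plus a small engine that fires them off. In this sketch the HTTP client is injected as a function so the example stays self-contained; a real engine would use http.request. All names are illustrative:

```javascript
// Tests as data: what to send, what to expect.
var testDescriptions = [
  { name: 'front page', method: 'GET', path: '/', expectStatus: 200 },
  { name: 'missing page', method: 'GET', path: '/nowhere', expectStatus: 404 }
];

// The engine: run each description through the client, compare results.
function runTests(descriptions, client) {
  return descriptions.map(function (t) {
    var status = client(t.method, t.path);
    return { name: t.name, passed: status === t.expectStatus };
  });
}
```

Because the descriptions are just data, coverage becomes visible by inspection, and the same descriptions could later be expressed in RDF.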

Viewed from the right angle, that looks very much like a processing pipeline, with the parameters for the processing units being determined declaratively. As it happens, I've been looking at exactly such a thing in relation to another project, Web Beeps. By abstracting out the processors and their parameters, I was able to optimize the system using a genetic algorithm. I very much doubt I'll need anything like a GA for testing, but the same kind of abstraction offers transparent management and systematic control of the tests.

I used an ad hoc representation for the system configuration in Web Beeps (see "Some Parameters", halfway down http://webbeep.it/implementation.html ), but right away it was obvious that RDF would be a nice fit for this stuff. In fact I had already made a start on refactoring in this direction.

Once I have a vocabulary for processing pipelines that works with Web Beeps (essentially allowing DSP systems to be defined declaratively), it'll be more than capable of describing what I need for Seki's tests.

It could be argued that expressing things like HTTP request calls in RDF is way too granular, and the whole idea of doing this declaratively is overengineering. But the purpose of Seki is primarily to make it easy to work with exactly this kind of data - it doesn't matter what the application of the data is. Ultimately its tests will be one of its applications. All the time I'm messing around with this stuff I'm learning about what's needed.

So I've just been doing a bit more coding towards the pipeline representation in RDF. I'd decided that the best way to generate the descriptions was from the code itself. To minimize dependencies I've been generating them as simple strings, expressing the data in Turtle. Usually it's a bad idea to generate stuff like this as strings, but in this case I reckon it's reasonable - each class only needs to look after its own description, so it works out as a couple of lines of string concatenation, possibly with a helper method call or two. It'll be manageable because it's so decomposed. Also, I'm constraining the description bits to a small interface so there's less to go wrong. (I've split this off as a separate subproject - see https://github.com/danja/dork ).
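The pattern is roughly this (class name and URIs invented for illustration): each component carries a describe() method that is just a couple of lines of string concatenation emitting Turtle:

```javascript
// Each component describes itself in Turtle; rdfs:label is real, the
// pipeline#Processor class URI is a placeholder.
function Processor(uri, label) {
  this.uri = uri;
  this.label = label;
}

Processor.prototype.describe = function () {
  return '<' + this.uri + '> a <http://example.org/pipeline#Processor> ;\n' +
         '  <http://www.w3.org/2000/01/rdf-schema#label> "' + this.label + '" .\n';
};
```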

Having said that, adding such stuff to existing code isn't entirely trivial. Right now I've got a fairly complex Web Beeps encoding/decoding chain set up to describe itself (starting at https://github.com/danja/WebBeep/blob/master/src/org/hyperdata/beeps/CodecTest.java ). So far it's generating 664 lines of Turtle (it's not particularly concise Turtle, mostly NTriples). But slap bang in the middle of that is the string null. Oops.
https://github.com/danja/WebBeep/blob/master/beepy.ttl
But I needed to trawl through the code at some point anyhow to add comments, so now is as good a time as any. I might even think about what unit tests it should have...

Seki

got a handful of followers so added them to a Circle; all your posts should show up over here... maybe, I've not really figured out how pages work yet...
 
Quick notes
I've other stuff pressing so will have to pause here for a few days. Bit of a brain dump of things I don't want to forget.

The Name
Seki is so named because I started by working against Fuseki (an Apache Jena sub-project), and Seki is another term from the game of Go picked at random. It turns out that it's sometimes translated as "mutual life", which is rather apt as the aim is to operate at the intersection between the Web of Documents (HTML) and the Web of Data (RDF).

Agents
Some years ago I came up with a generic model for agents on the Web, described here: https://secure.flickr.com/photos/danja/6788357883/ I want to keep this in mind as Seki develops, and I just (re)discovered I'd already considered it from a Seki-like angle: https://secure.flickr.com/photos/danja/5558883807/ - a key point there is support for communication with other agents, i.e. interop with other services and federation.

Dogfood
As soon as possible I want to start using Seki for things I currently use other tools for: my blog, Wiki, note-taking, bookmarking, project management etc.

Research
Although one of my main motivations for Seki is for it to be a testbed for experimentation, I want to keep the core of Seki as "standard" as possible and follow known good practices. I think the best way I can square this is to ensure every novel feature that gets added supports graceful degradation.
A case in point is affordance stuff. I want objects in the data to carry some information about how they should be rendered, what interactions they support etc. Right now I'm thinking of including Javascript in the RDF. But I should also provide a sane fallback where Javascript isn't supported.

Server/Browser Balance
1. A minimum on the server side, I reckon, is exposing the RDF as linked data, providing (at least) RDF and HTML representations of all (public) resources at individually dereferenceable URLs. In practice this will probably be a subset of the Linked Data API. [I'll call this *Linked Data Lite*]
2. A minimum in a browser is link-based navigation and representation editing through HTML forms.
These two together are what I've got in mind for the first live "app": getting I Ching material served as linked data.

But naturally I want each of these to ultimately be pretty sophisticated, as well as providing support for machine-oriented services. I'm hoping that using Javascript as the lingua franca will allow some portability of code between server and client, and hence sharing of the workload - i.e. fancy renderings may be worked out server-side, or raw data and scripts given to the browser to let it work things out.

Right now I'm not sure of the best approach to passing data to the client - raw RDF (and include a js API) or JSON (and use some existing js model framework). But I'll cross that bridge when I come to it.

probably more notes to follow before the day's out

https://en.wikipedia.org/wiki/Go_terms#Seki

Seki

there's a better formatted version of this (with links that work) on my blog :

Seki Web Application Framework : Summer Update

tl;dr : for a first concrete application, I've been building Seki up as a CMS. For basic functionality it's now getting close to being usable.
Call for Funding
So I've had a little burst of activity on Seki in the past few weeks. A few blind alleys, but significant forward progress. Overall it's evolving, and the ideas have been crystallising a little, so I'll lay them out a bit here. I only intended to write a brief update, but then... At some later date I'll probably pull out the bits that aren't Seki-specific; there's quite a bit that I suppose would come under the umbrella of Web app best practices.
I'm now considering Seki a Web Application Framework with a long list of potential applications. They range from fairly generic things like workflow/issue management or an info aggregator to more vertical things like a creative writing assistant or a recording studio equipment inventory system. As well as scratching a few of my own itches, I'd like to use Seki as an experimental testbed, kicking the tyres of the relevant technologies. There are plenty of WAFs around, several (like Seki) built on node.js. Seki's Unique Selling Proposition is essentially its Semantic Web foundation, though as far as I'm aware the full decoupling from the datastore is also unique.
I need to break down the potential applications list pretty soon to help with deciding priorities, but some components are obviously common to many apps, providing top priorities. A minimum viable product (in the sense of something I can usefully make live) drops out of this - a simple CMS.
The components are:
System Admin
Content Management
User Management
Social Connectivity
At a system level, there's also the overarching need to support configuration and extensibility/plugability. I'll bundle these under System Admin. 
Content Management
I'll start with this because it's a well-known area. 
Requirements
A CMS requirements checklist can get very long. But the core functionality is about creating, organizing and publishing documents. So the system needs front-end tools for editing content, storage that allows structuring and indexing (with associated user tools) and different ways of presenting the documents. A fairly minimal system would support blogging: create/edit posts (and comments), post discovery (categories, search, related resources etc.) and at least views for single posts and a front page for recent posts. A blogging engine suits me as a relatively short-term goal: I'm a big believer in eating your own dogfood, and as soon as it's viable I'll start using Seki for my own blog.
Most CMSs are (unsurprisingly) highly focussed on HTML. It is after all the primary format on the Web. However, both HTTP and RDF (Seki's data model) are format-agnostic, and the conceptual model of RDF covers any kind of data, not just documents. 
Not unrelated, HTML isn't the only fruit. There are an awful lot more media types out there. Any reasonable CMS will be able to deal appropriately with things like images, audio and video files, but when we're talking about more generalized hypermedia (with an emphasis on hyperdata, Seki's core domain) it's necessary to keep an eye on HATEOAS (though in a broader sense than Roy frames it, with the H standing for Hypermedia rather than just Hypertext). As Mike Amundsen has put it: "Hypermedia Types are MIME media types that contain native hyper-linking semantics that induce application flow. For example, HTML is a hypermedia type; XML is not." (See also : Exploring Hypermedia with Mike Amundsen). In June 2013, after the Web's been around for more than 20 years, it's remarkable that you can pretty much count the media types that support links on one hand (including knuckles). The most widespread ones are probably HTML, Atom, SVG, RDF (all formats) and one or two proprietary systems like Flash and PDF. Even then, to qualify for Roy's HATEOAS the type also needs human/agent interface(s) that support the hyperlink affordance. That narrows the list even more - e.g. the only RDF format with standardized affordances is RDFa (it sits on HTML).
And there's more. A common failing amongst current CMSs is their suboptimal use of HTTP. For example, it's common to find a single endpoint that's used for POSTing content, and URLs of the form http://example.com/index.php?p=423. These may be translated in use by mod_rewrite or whatever to "Pretty Permalinks" like http://example.com/2003/05/23/my-cheese-sandwich. Seki will provide these pieces of functionality somewhat more natively: a resource will be called e.g. http://example.com/2003/05/23/my-cheese-sandwich in the back-end store, and it can be created/updated/deleted using HTTP (GET/PUT/DELETE) requests directly on that URL. In other words, more RESTfully. This is also in line with some of the core parts of the Linked Data API. It's not a priority, but ultimately, for convenience, I'd like Seki to support more of that, particularly referencing resources using URI Templates.
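A sketch of that verb-to-store mapping; the operation names here are placeholders for whatever the store layer actually exposes, but the point is that the request URL itself names the resource in the store:

```javascript
// Map an HTTP request directly onto a store operation on the same URI.
function storeOperation(method, url) {
  switch (method) {
    case 'GET':    return { op: 'describe', graph: url }; // read the resource
    case 'PUT':    return { op: 'replace', graph: url };  // create or update
    case 'DELETE': return { op: 'drop', graph: url };     // remove
    default:       return { op: 'unsupported' };
  }
}
```

No ?p=423 indirection, no rewrite rules: the URL people link to is the URL the back end stores against.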
Incidentally, I just came across the OASIS Content Management Interoperability Services spec. It does use Atom/APP as a base for one of its bindings, but most of it is a big vocabulary of XML terms, and I reckon the chances are the interoperability will be 99% limited to other systems that speak CMIS. Haven't said it for a while, but I reckon it's called for here: should've used RDF.
Seki Design
The aim is to reflect the requirements above in Seki, so it could be seen not so much as a Content Management System as a kind of Data Management System. There's still a huge amount of Blue Ocean around the Web of Data, and one aim I have with Seki is to use it as an experimental platform, exploring some new waters.
A key feature of Seki is that the backend database is a (SPARQL 1.1 capable) RDF triplestore. Like pretty much any other CMS, some way of mapping between the document format and DB schema is needed. Templating of one form or another is the usual approach, which ideally will be done in a way that helps keep separate the view aspect from model and control. Using an RDF store offers huge advantages over a traditional SQL RDBMS for Web applications in terms of flexibility, in that schemas don't have to be predefined. RDF triplestores also trump both SQL and most NoSQL DBs in terms of Web-friendliness because the URL (more generally, URI/URIRef/IRI) is treated as a first-class type, so everything can be intimately linked with the HTTP protocol.
There are loads of alternatives for templating engine, and Seki uses freemarker.js. Two main factors led to this choice: I've used it before (the original Java version, within Apache Stanbol) and it's simple (essentially just content replacement and includes, simple conditional display and lists). Many templating engines resemble full-blown programming languages (cf. PHP), I'd suggest that makes it too easy to overload the templating with extra functionality, breaking the separation of concerns.
To allow consistency and DRY, the same templating subsystem is used in Seki for formatting between the browser and application and between the application and store (SPARQL queries/updates are templated).
Regarding media types, having a triplestore backend means that the RDF formats are a no-brainer. For Web presentation, HTML is essential. While application/x-www-form-urlencoded is the default media type for HTML form POSTing, it doesn't really lend itself to reuse. A more versatile choice here is JSON. However, in itself, JSON doesn't have a notion of links. But JSON-LD does, with the bonus that it can also be considered a fully-fledged RDF format. Things like images, audio and video can all be handled in their own media types via REST. Atom (and Atom Protocol) are desirable for additional connectivity, see Social Connectivity below. WebDAV support would be nice to have, to allow the use of other clients, but it's not really a priority.
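For a flavour of why JSON-LD fits here, a tiny (hypothetical) blog-post resource, where the @context gives plain JSON keys linkable RDF meaning so the same message can serve forms and the store (the dcterms URIs are real; the resource and property choices are illustrative):

```javascript
// A JSON-LD message for a post: ordinary JSON to the client,
// RDF triples to the triplestore.
var post = {
  '@context': {
    'title': 'http://purl.org/dc/terms/title',
    'created': 'http://purl.org/dc/terms/created'
  },
  '@id': 'http://example.com/2003/05/23/my-cheese-sandwich',
  'title': 'My Cheese Sandwich',
  'created': '2003-05-23'
};
```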
So in the near term I want Seki to be able to handle (in both directions):
HTML
JSON-LD
Turtle 
Arbitrary media formats
Progress
Server-side : While HTML stays the main rendering format, I've started augmenting this with a bit of RDFa. This is easy with the templating system.
I've also been changing things to make JSON-LD the primary format for passing info to the server. There's still quite a way to go with this, mostly a matter of more general refactoring and code reorganization. I've got libs at hand that help with this, notably node-rdf and json-ld (I held off incorporating an RDF lib into Seki as long as I could, as for most things it would be overkill, but when it came to format conversion it seemed essential).
I've done a little coding towards 'baking' pages (turning them into static files), the original motivation being performance, though it does seem a generally good idea, as Aaron has described. It's since occurred to me that this should be mighty handy for archiving and versioning. Simply bake older versions together with relevant metadata at a URI discoverable from the latest version of the page.
Client-side : I've run through a few editing UIs - 
simple form textarea - functional but ugly/limited
TinyMCE - was pretty nice (and the RDFaCE RDFa + auto-annotation extension is very cool) but at the time I was running a separate HTTP server (in the Seki node.js code) on a different port to serve static files, and I ran into painful XSS issues (I've since reverted to serving static files on the same port, but later want to offer an external static server as an option)
Aloha - has a neat UI, but it's surprisingly big, and when I tried customizing it (with CSS etc. and Backbone.js) it seemed more trouble than it was worth. Around this point I remembered Create.js (a bit of embarrassing memory loss, as I did a keynote at a workshop in Salzburg featuring it last year).
Create.js (see demo) - this is more or less a wrapper for an editor which makes it operate as a VIE component. It supports other editors (including Aloha) but the default is Hallo, my favourite editor to date.
I wanted to use jQuery and jQueryUI components in Seki's views, move to JSON-LD for client -> server messages and use Backbone.js to logically structure the messages. This is pretty much how Create.js operates, plus RDFa to indicate editable areas etc (a slight caveat is that it follows an earlier version of the JSON-LD spec, but it only took a few lines of code to massage messages into shape). 
The general design approach of Create.js/VIE components is really good, they support a good degree of system decoupling.
Right now I've got Create.js basically integrated, though it still needs a fair bit of tweaking.
Next Steps
tweaking config of Create.js
making views for blog front page, archive links etc.
adding comment facility
tests
refactoring
docs
What I've not got around to yet are the finer points of HTTP, notably the bits needed for caching (ETag & Last-Modified). But basic support for these is relatively straightforward.
User Management
I recently covered the requirements and design of this part of Seki in User Management via RDF. [Oops, I just overwrote the image over there with the latest version - oh well, it is the latest version...]
Progress
I've since tweaked the vocab a little to improve modelling of Spaces (sets of resources) and added direct linkage to FOAF profiles. 
As it happens, the jQueryUI demo for a modal form dialogue is the Users part of a user management UI, giving me a good starting point there.
I've put together a little bit of code to create a few default RDF graphs in the triplestore on startup (with the --init command-line option), including default instance data for admin and anonymous users in a graph (http://hyperdata.org/users).
Additionally, I've set up a basic user registration form. For this (and similar forms) I plan to use the VIE form generator component. Unfortunately this currently only supports the JSON model used by schema.rdfs.org. I've started a little bit of utility code to transform RDF vocabs into this format.
Next Steps
make admin UI to edit Users, Roles and Permissions
make views for individual resources in the users graph (Users, Roles and Permissions)
make ASK queries to check a user's permissions for a given resource
hook into auth chain, providing appropriate representation rendering
Social Connectivity
Requirements
The general direction of this is towards the blogosphere, Facebook, Google+, Twitter, Delicious, Flickr, Pinterest etc. Hooking into the blogosphere can in part be achieved by providing a syndication feed and a built-in aggregator. Adding knowledge of FOAF allows 'Friending'-like functionality. And all this can fit in with a distributed Web environment.
I have no intention of implementing anything as complex as Facebook, and am wary of walled gardens, but then again there's no reason to block such a thing as a potential future application. Per-user spaces (like individual blogs, G+ pages or Twitter streams) and threaded discussions are certainly within scope. 
When it comes to things like Facebook's games and apps - it's a nice idea, however cheesy they tend to be. See pluggability below.
For linking into existing social net accounts, that can be achieved in some cases via relatively straightforward APIs, otherwise more complex apps that plug into the host system are potential options.
Seki Design
Blogosphere connectivity is fairly high up the priority list, mostly because I want to use it myself. Atom feeds are the first thing, and a built-in aggregator should be relatively straightforward later on (I've built several in the past, having a triplestore backend is really handy). There is a bit of trade-off judgement required with things like this. While snagging things like one person's bookmarks from Delicious can be relatively lightweight, it won't be desirable to mirror the whole of the blogosphere locally. I've not looked with an eye to hooking into Seki yet, but there are plenty of summarising and indexing/metadata-extracting tools out there, storing a little bunch of triples for pages of interest would be good.
Spidering of FOAF (and similar material) is on the list, some care (maybe limit to 1 hop) will be needed not to frighten the horses by appearing to invade privacy.
For the fun of it I might well add a Jabber chat client/server. Might even look into audio/video conferencing out of curiosity. Not really a priority right now though.
Progress
All I've tried along these lines is grabbing & converting a dump of my Delicious bookmark data, putting that in the store to experiment against.
Next Steps
Atom feed(s)
FOAF profile builder (a la FOAF-a-matic)
feed aggregation
FOAF/RDF spidering
System Admin
Here I'll also include bits that didn't quite fit in the sections above. Most is Seki-specific so I'll roll the following together :
Requirements & Seki Design
System configuration : Some of the config is dependent on the host system and target SPARQL 1.1 store, those pieces must be configurable externally. Within Seki there's the need to load config info on startup and (ideally) allow its modification. At runtime there's a lot of potential for configuration and even system extension using declarative definitions in the RDF.
Data management : In the most general case, RDF editing can change any data. While crude, the query forms that are generally bundled with SPARQL servers (like Fuseki, the one used by default in Seki) offer this facility. A tool something like the (now stale) Tabulator generic data browser/editor would be very nice to have. 
It's hoped that VIE will help with application-specific data management. 
Pluggability : to act as a framework, it must be possible to plug in functional blocks or modules. Initially this will be limited to CMS-like themes (packaged up custom templates, bootstrap data and CSS etc). 
Going further, a very powerful paradigm I'd like Seki to support in the near future is in-place runtime code modification (and "hot code updates"), roughly along the lines of Smalltalk (e.g. Squeak) or emacs, though using Javascript. I think basic implementation will be relatively straightforward, but the vulnerabilities this would potentially expose are likely to make the security aspects (sandboxing) hard work. The plan is to disable the feature by default, enabling it by a startup option to allow experimentation.
Drawing back a little, blocks of functionality could be implemented by building them as (quasi-) self-contained services on HTTP, accessed via a RESTful API. Hooks could be inserted into the code at appropriate points at runtime to allow calling such services. I think it would be best to view such services as little agents (I did some work around this a few years back, which I presented at a Scripting for the Semantic Web meetup at ESWC2007 - must dig out the slides). If the (HTTP) messages are suitably self-descriptive, it should be straightforward to set up a simple Flow-Based Programming DSL, maybe layered on async. (See also: NoFlo). Of course, such services/agents could be completely external to the system. 
Progress
I've put together a bit of code to create default graphs and populate them on startup (the --init command-line option does the bootstrapping).
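A sketch of the kind of SPARQL 1.1 Update that this sort of bootstrapping could send to the store; the graph URI and triples are placeholders:

```javascript
// Sketch of a bootstrap update: create a named graph if absent, then seed
// it. The graph URI and triples are placeholders, not Seki's real ones.
function initGraphUpdate(graphUri, turtleTriples) {
  return 'CREATE SILENT GRAPH <' + graphUri + '> ;\n' +
         'INSERT DATA { GRAPH <' + graphUri + '> {\n' +
         turtleTriples +
         '\n} }';
}
```

The resulting string would be POSTed to the store's SPARQL 1.1 Update endpoint.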
By default Seki will serve static files on the same port as dynamically generated pages. It's now using my own hacky server code. This only has minimal features so far (compared to, say, node-static), though for a bit of extra flexibility and to allow reuse I've implemented it as Connect middleware. My justification for DIY is that I want to know what's going on internally, the better to provide hyperdata/hypermedia support.
Next Steps
integrate ACE source editor (just for HTML & Turtle for now)
experiment with VIE components, in particular the form generator
graph creation & data file loading form
set up Stanbol as an external service, dynamically hook Seki into it
figure out theme packaging definition & installation
explore Tabulator-like tools, consider implementing something similar
refactor the whole of Seki, set it up to allow npm packaging
tests & docs
live demo
There are a bunch of other bits and pieces of ideas floating around, especially around using Seki to actually build custom applications. But I think the above covers most of the priorities for a good while, though they may well change as I go along.
Call for Funding
My recent burst of activity ate into time when I really should have been doing work that would pay the bills. So if anyone has the ability to fund this, or knows of any approaches to funding I could try, please let me know. While the dev path for Seki itself is pretty well determined for the near future, I will happily be influenced by $$$. In particular, if you have an idea for a web app that isn't already out there, fund me a bit (every little helps) and I'll see what I can do about implementing it. The core Seki framework will always be open source, but I wouldn't object to building closed-source applications on top.
If you're tempted by this but want to wait until you've seen a demo, let me know and I'll ping you when I've got something live.
Seki lives on Github and has a G+ page (with some older docs). There are also rough TODO notes & refs. on Workflowy, and a handful of posts tagged 'Seki' on my blog. I'm @danja, danny.ayers@gmail.com

Seki

Shared publicly  - 
 
I've got back into coding on Seki, I'll post a proper update in a few days. In lieu of that, here's a note re. naming/versioning:

https://plus.google.com/112609322932428633493/posts/BQWZwCM1gTD 

Seki

Shared publicly  - 
 
I want to have a live server running asap, but also to have a dev one running locally. Later I'll move to simply having a common (Fuseki) backend, but for now, running separate triplestores, I want to be able to pass whole datasets between the two.

At some point I should set up simple dump/load to quads, but I've wound up starting on a variation: saving individual graphs to the filesystem as separate files. This more or less corresponds to 'baking' the RDF.

Part of the motivation is to be able to point to places on my filesystem (or the Web) and have it crawl them, gobbling up the triples and effectively mirroring what's there. This is pretty much 'unbake'.

Today I have hooked up a little UI (admin.html) and so far have that firing off a query to list the graph URIs in the store, as a start for 'bake'. Also started writing the docs for this (under www/docs).
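The graph-listing step corresponds to a standard SPARQL query; here's a hedged sketch of building the GET URL for it, with a typical Fuseki-style endpoint assumed:

```javascript
// The query behind the graph listing, plus a helper to build the GET URL.
// The endpoint below is a typical Fuseki default, assumed for illustration.
const LIST_GRAPHS = 'SELECT DISTINCT ?g WHERE { GRAPH ?g { ?s ?p ?o } }';

function listGraphsUrl(endpoint) {
  return endpoint + '?query=' + encodeURIComponent(LIST_GRAPHS) +
         '&output=json';
}
```

The returned bindings give the graph URIs to iterate over when 'baking' each graph to a file.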

http://www.aaronsw.com/weblog/000404
Bernard Tremblay:
We're none of us better off having the whole static/dynamic thing mooted. Kind of like plug&play ... convenience as a vector for stupification.

I remember wrassling this issue with friends ... summer/fall of 1999 ... building IndyMedia ... great fun. Great way of being truly sociable.

p.s. also: none of us are better off for the fact of having projects / pages like this fall silent. best wishes, Mr. Ayers.

Seki

Shared publicly  - 
 
Update : Listy Thing, Testing, Auth, IKS integration

I've not touched the core code recently, mostly been planning ahead. Next steps are fairly straightforward (refactoring/reorganising mostly, templating tweaks etc) but I wanted to be clear where I was going further ahead before attacking the code itself.

TODO list at : https://workflowy.com/shared/dd5976b2-b48f-9096-0357-105f34b4d6ed/

Listy Thing
One of the dogfood apps I have in mind for Seki is an RDF-enhanced near-clone of Workflowy. I've been using Workflowy (see TODO list above) but have discovered there's a limit of 500 monthly updates, so that's pushed this up the priorities a bit.
Workflowy keeps all its data in scary JSON (even in the browser), but does allow text-based export: basically space-indents for tree structure. So I wrote a parser for their format which outputs XHTML lists. I also made a start on an html2turtle XSLT for this, though I need to make a decision on the RDF model for lists, probably:

[ lists:parent <#Tea>; lists:item <#Black> ; lists:position "1" ]

B.2 here :
https://plus.google.com/112609322932428633493/posts/18V3pmUFkeS
Related source at:
https://github.com/danja/seki/tree/dev/src/www/lists/workflowy
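As an illustration of the parsing step (a sketch assuming two-space indents, not Workflowy's actual spec), an indented export can be folded into a tree with a simple stack:

```javascript
// Sketch of parsing a space-indented outline into a tree. Two-space
// indents and the node shape are assumptions, not Workflowy's spec.
function parseOutline(text, indentWidth) {
  indentWidth = indentWidth || 2;
  const root = { label: null, children: [] };
  const stack = [{ depth: -1, node: root }];
  text.split('\n').forEach(function (line) {
    if (!line.trim()) return; // skip blank lines
    const indent = line.length - line.replace(/^ +/, '').length;
    const depth = indent / indentWidth;
    const node = { label: line.trim(), children: [] };
    // pop back to this line's parent, then attach
    while (stack[stack.length - 1].depth >= depth) stack.pop();
    stack[stack.length - 1].node.children.push(node);
    stack.push({ depth: depth, node: node });
  });
  return root;
}
```

Walking the resulting tree to emit nested XHTML lists (or lists: triples like the model above) is then a straightforward recursion.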

Testing
After remembering the DAWG tests I pinged +Andy Seaborne (on the Jena list), he gave me some pointers re. Fuseki & tests
http://mail-archives.apache.org/mod_mbox/incubator-jena-users/201203.mbox/raw/%3C4F660FD7.2090102%40apache.org%3E
notably:
arq.qtest --earl manifest.ttl
I need to read up around that a bit. My current plan is to build a mini-agent for running generic actions (HTTP client methods + comparisons mostly) defined in an extension of the DAWG test vocab.

Auth
+Thomas Bergwinkl and +Dominik Tomaszuk gave me some good pointers over here:
https://plus.google.com/u/0/112609322932428633493/posts/hmhNDJLgAeY
stuff linked from:

http://www.w3.org/community/rww/wiki/AccessControl
Right now I'm thinking the best bet will be to go with a variation of Bergi's vocab using SPIN (SPARQL in RDF) for the filters:
http://spinrdf.org/spin.html
http://spinrdf.org/sp.html
- together with something like Thomas' modeling of roles.

After I've got some of the refactoring out of the way I think I'll put things in place to support HTTP Basic authentication, so I've got something to experiment with authorization against.

PS. D'oh! Forgot the notes I wanted to put in from this morning.
I've created a service description for Fuseki that includes two persistent datasets (Public & Private) along with one in-memory dataset (Temp). Not yet tested.

Private - ACL stuff and system config
Public - the main store

What I'm thinking of trying is this: when a user logs in, a CONSTRUCT query against the Private store is used to create a graph in Temp. That graph will be deleted when they log out. Any user interactions will be validated against the Temp graph. In a sense the Temp graph will be like a session, though I do want to minimise the non-RESTful bits.
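The login-time CONSTRUCT could look something like the sketch below; the user and Private graph URIs are illustrative placeholders, and the result would then be loaded into a graph in Temp:

```javascript
// Sketch of the login-time CONSTRUCT: copy the triples about one user out
// of the Private store, ready to load into a session graph in Temp.
// Both URIs here are illustrative placeholders.
function sessionConstructQuery(userUri, privateGraph) {
  return 'CONSTRUCT { <' + userUri + '> ?p ?o }\n' +
         'WHERE { GRAPH <' + privateGraph + '> { <' + userUri + '> ?p ?o } }';
}
```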

IKS Integration
I've got a keynote to do at:
http://wiki.iks-project.eu/index.php/Workshops/Salzburg2012
- so some time before then I need to play with their semantic CMS kit. It all seems very pluggable, and I've got my blog data to work on. +Reto Bachmann-Gmür is encouraging me to use it with Gradino 2, his reworking of my hacky blog engine to use Apache Clerezza. But it's all RDF-backed, and key bits of the IKS stuff are accessible through Javascript, so I should be able to reuse anything that comes out of that alongside Seki.

Seki

Shared publicly  - 
 
Quick notes
I've other stuff pressing so will have to pause here for a few days. Bit of a brain dump of things I don't want to forget.

The Name
Seki is so named because I started by working against Fuseki (an Apache Jena sub-project), and Seki is another term from the game of Go picked at random. It turns out that it's sometimes translated as "mutual life", which is rather apt as the aim is to operate at the intersection between the Web of Documents (HTML) and the Web of Data (RDF).

Agents
Some years ago I came up with a generic model for agents on the Web, described here: https://secure.flickr.com/photos/danja/6788357883/ I want to keep this in mind as Seki develops, and I just (re)discovered I'd already considered it from a Seki-like angle: https://secure.flickr.com/photos/danja/5558883807/ - a key point there is support for communication with other agents, i.e. interop with other services and federation.

Dogfood
As soon as possible I want to start using Seki for things I currently use other tools for: my blog, Wiki, note-taking, bookmarking, project management etc.

Research
Although one of my main motivations for Seki is for it to be a testbed for experimentation, I want to keep the core of Seki as "standard" as possible and follow known good practices. I think the best way I can square this is to ensure every novel feature that gets added supports graceful degradation.
A case in point is affordance stuff. I want objects in the data to carry some information about how they should be rendered, what interactions they support etc. Right now I'm thinking of including Javascript in the RDF. But I should also provide a sane fallback where Javascript isn't supported.

Server/Browser Balance
1. A minimum on the server side I reckon is exposing the RDF as linked data, providing (at least) RDF and HTML representations of all (public) resources at individual deref'able URLs. In practice this will probably be a subset of the Linked Data API. [I'll call this *Linked Data Lite*]
2. A minimum in a browser is link-based navigation and representation editing through HTML forms.
These two together are what I've got in mind for the first live "app": getting I Ching material served as linked data.
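Point 1 implies content negotiation; a minimal sketch of picking a representation from the Accept header (with media-type matching deliberately simplified, no q-values):

```javascript
// Minimal conneg sketch for "Linked Data Lite": pick a representation from
// the Accept header. Matching is deliberately naive (no q-values).
function preferredFormat(acceptHeader) {
  const accept = (acceptHeader || '').toLowerCase();
  if (accept.indexOf('text/turtle') !== -1) return 'turtle';
  if (accept.indexOf('application/rdf+xml') !== -1) return 'rdfxml';
  return 'html'; // browsers and unknown agents get HTML
}
```

The HTML default is the graceful-degradation path: anything that doesn't ask for RDF gets a readable page.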

But naturally I want each of these to ultimately be pretty sophisticated, as well as providing support for machine-oriented services. I'm hoping that using Javascript as the lingua franca will allow some portability of code between server and client, and hence sharing of the workload: fancy renderings may be worked out server-side, or raw data and scripts handed to the browser to work out.

Right now I'm not sure of the best approach to passing data to the client - raw RDF (and include a js API) or JSON (and use some existing js model framework). But I'll cross that bridge when I come to it.

I've added a pic from Shelley Powers to the profile here, thought it was kinda appropriate:
https://lh5.googleusercontent.com/-0LwRk4tsqv0/T18LLLCgJzI/AAAAAAAAAB8/3Dhl1mXlow0/s320/extrapolate.jpg

probably more notes to follow before the day's out

https://en.wikipedia.org/wiki/Go_terms#Seki
Story
Tagline
This is a blog about Seki. Seki is a front-end to an independent SPARQL server using node.js.
Introduction
Seki is a piece of Semantic Web middleware based around the SPARQL protocol and query language. It's being built in node.js by Danny Ayers (G+).
