Proposal: Entity-Driven Tooling

tl;dr: what if one command could scaffold out the CRUD models/views for your client- and server-side code, with baked-in offline support? Would this help you? Would it solve a pain point of yours? Are there better ways to do this than what's described below?

The following describes a proposal the Yeoman team may consider as a future experiment in improving full-stack development with offline as a first-class citizen. We would love your feedback on it.

The Problem

Not enough developers are making use of offline APIs in their sites and applications. As a result, sites and applications fail when connectivity drops, since there is no local data store capable of fulfilling requests, and unnecessary network requests are made to the server to populate local model representations. Additionally, there is confusion about the landscape: which APIs are supported, and to what degree.

In addition, client-side developers are often siloed from server-side developers, despite the fact that their data entities are likely to be similar, if not identical. This can lead to inconsistencies in the way data is stored and represented.

The Solution

We propose a fundamental change in the way applications are scaffolded. Developers define a schema upfront (or import an existing one from the server-side) representing their data, and we automatically scaffold their server-side, client-side and offline/sync code for them.

Current offline web-app workflow:

* Design your model schema
* Set up a database using this schema
* Implement server-side CRUD
* Implement client-side CRUD
* Use the AppCache manifest, localStorage or the FileSystem API for the offline story

This is an extremely manual process with lots of repetitive steps, and offline and sync are very hard to tackle from a tooling perspective: there is too much room for variance.

Proposed workflow:

From a usability perspective, this workflow could be:

yo crud schema.json
-> scaffolds your client-side CRUD
-> scaffolds your server-side CRUD
-> bakes in an offline/db layer for you

Note: schema.json is either imported from an existing server-side model or is generated using another tool. TBD.

* Design your model schema
* Automatically generate server-side and client-side code to handle CRUD operations
- e.g. yo crud schema.json
- The code that is generated can be for any language, particularly in the case of server-side, provided there is a generator for it: Java, Ruby, Python, PHP, JavaScript
* Automatically generate backing store scaffold code, e.g. SQL statements.
- Potentially this could be automated if we have connection details to the stores, though more likely is a developer will want to do a dry run of the code, especially for updates, to ensure nothing gets broken.
* Automatically scaffold offline support
- We take care of sync
* Want to adjust your schema? No worries.
- Edit schema.json
- Re-run yo crud
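To make the workflow concrete, here is one sketch of what a schema.json might contain. The format is entirely hypothetical and, as noted above, TBD; the entities, field names and options below are illustrative only:

```javascript
// Hypothetical schema.json contents, shown here as a JS object.
// Every name and option below is illustrative; the real format is TBD.
const schema = {
  entities: {
    user: {
      fields: {
        id:    { type: 'integer', primary: true },
        email: { type: 'string', unique: true }
      }
    },
    task: {
      fields: {
        id:      { type: 'integer', primary: true },
        title:   { type: 'string' },
        done:    { type: 'boolean', default: false },
        ownerId: { type: 'integer', references: 'user.id' }
      }
    }
  }
};

// From this single definition the tool could scaffold server-side
// endpoints, client-side models and the offline store.
console.log(Object.keys(schema.entities).join(', ')); // user, task
```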

Primary benefits of this approach are:

* Built-in offline mechanisms for client-side
- A developer can call the JavaScript APIs and an offline-capable store (e.g. IndexedDB) can be queried for the data.
- The offline storage can be populated as part of handling the response from the server-side.
- The right API can be selected to fulfill the offline requirements depending on the platform capabilities.
- The offline storage component is effectively transparent to the developer.
* Automated scaffolding of client-side and server-side code (including referential integrity and type enforcement) for:
- APIs and endpoints.
- Model generation.
- Store scaffolding:
  SQL table create/update statements for the server-side.
  WebSQL/IndexedDB for the client-side.
* Off-the-shelf support for best practices, e.g. server-side authentication for API endpoints
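As a rough illustration of the store scaffolding, a generator could derive SQL creation statements directly from an entity definition. This is a minimal sketch; the entity format and type mapping here are assumptions, not the proposal's actual format:

```javascript
// Sketch: derive a CREATE TABLE statement from an entity definition.
// The field format and SQL type names here are illustrative only.
function createTableSQL(name, fields) {
  const columns = Object.entries(fields).map(([column, spec]) => {
    let sql = `${column} ${spec.sqlType}`;
    if (spec.primary) sql += ' PRIMARY KEY';
    return sql;
  });
  return `CREATE TABLE ${name} (${columns.join(', ')});`;
}

console.log(createTableSQL('task', {
  id:    { sqlType: 'INTEGER', primary: true },
  title: { sqlType: 'TEXT' },
  done:  { sqlType: 'BOOLEAN' }
}));
// CREATE TABLE task (id INTEGER PRIMARY KEY, title TEXT, done BOOLEAN);
```

An equivalent generator could emit the IndexedDB `createObjectStore` calls for the client side from the same definition.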

Practical vision

We have an HTML-based web app that allows you to visually create your app’s data model. From there we can scaffold out:

* Server-side API endpoints
* Server-side storage
* Client-side models and APIs
* Client-side offline storage

By doing things this way we make the model the driver for the entire stack. Although both client-side and server-side frameworks commonly offer model representations, this proposal takes a more philosophical position: we assume a unified, decoupled model shared by both sides, from which we can create the implementations for each.

Server-side API endpoints and storage

If we know what data you want to store we can create a RESTful API in any language: PHP, JavaScript (Node), Ruby or Java. We can ensure that all data coming in meets the contract set out by the model, both in terms of types and referential integrity. Essentially CRUD operations can be created very quickly in the server-side language of choice, and backed by the store of choice, whether that’s SQL Server, MySQL, Redis or PostgreSQL.
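As a sketch of that contract enforcement, the generated endpoint code (in whatever language) would run a check like the following before accepting a payload. The field definitions are illustrative, not a proposed format:

```javascript
// Sketch of the type/contract check a generated endpoint might run
// before accepting a POST body. The field list is illustrative only.
const taskFields = {
  title:   { type: 'string',  required: true },
  done:    { type: 'boolean', required: false },
  ownerId: { type: 'number',  required: true }
};

function validate(fields, body) {
  const errors = [];
  for (const [name, spec] of Object.entries(fields)) {
    const value = body[name];
    if (value === undefined) {
      if (spec.required) errors.push(`${name} is required`);
      continue;
    }
    if (typeof value !== spec.type) {
      errors.push(`${name} must be a ${spec.type}`);
    }
  }
  return errors;
}

console.log(validate(taskFields, { title: 'Ship it', ownerId: 7 })); // []
console.log(validate(taskFields, { title: 42 }));
// [ 'title must be a string', 'ownerId is required' ]
```

The same definition could drive the equivalent check in PHP, Ruby or Java, keeping client and server in agreement on the contract.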

Client-side models and APIs

Similarly to the server-side work, we can also create a set of APIs and models for the client-side part of the stack. We will know what JavaScript is required for the models, and these can be scaffolded out using Yeoman or Grunt, and we can also seamlessly create API methods that match the server-side implementation.

Client-side offline storage

As a related benefit, since we know the data entities that our application is intending to pass around, the theory goes that we should also be able to create appropriate client-side storage for these items up front, again creating CRUD methods which can integrate with the APIs.

For example, requesting data from the API could trigger a request to a local store first, and should that fail, we would then make a network request to our generated server-side API for the live data. Adequate mechanisms for failed network requests (or timeouts) can also be built in to the API for both client and server code.
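The local-store-first behaviour described above can be sketched as follows. The `localStore` and `fetchRemote` functions here are stand-ins for whatever the generated code would provide:

```javascript
// Read-through lookup: serve from the offline store when possible,
// fall back to the network and populate the store on a miss.
// localStore and fetchRemote are illustrative stand-ins.
async function getEntity(id, { localStore, fetchRemote }) {
  const cached = await localStore.get(id);
  if (cached !== undefined) return cached; // fulfilled offline

  const fresh = await fetchRemote(id);     // hit the generated API
  await localStore.put(id, fresh);         // populate the offline store
  return fresh;
}
```

A second request for the same id would then be answered from the local store without touching the network, which is exactly the behaviour wanted when connectivity drops.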

The offline storage would be transparent to the developer, and could be backed by (mobile-friendly) WebSQL, IndexedDB or localStorage, depending on the requirements.

Existing Schemas

In the event that the developer is working against an existing schema on - say - the server-side, it should be possible to derive a representative model for that schema, from which the client-side can then be built. That is, it should be possible to either import or export a schema from the tool, such that explicit buy-in from server-side developers is not required, though advantageous.

Store Emphasis

The developer should most likely be allowed to choose whether the client's data store or the server's data store is to be considered the master. For example, the developer of a mobile site or application may choose to make the client store the master, essentially using the server-side as something of a sync solution.

In other cases, for example collaborative tools, having a single source of truth may be paramount, and therefore choosing to make the server-side the master is more suitable. The generated code should allow for this distinction.
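One way the generated sync layer might expose this choice is as a single configuration knob. This is purely a sketch; the names and record shapes are illustrative:

```javascript
// Sketch: pick which copy of a record wins based on the configured master.
// 'client' suits mobile-first apps; 'server' suits collaborative tools.
function pickAuthoritative(config, clientRecord, serverRecord) {
  return config.master === 'client' ? clientRecord : serverRecord;
}

const config = { master: 'server' };
console.log(pickAuthoritative(config,
  { title: 'edited on device' },
  { title: 'edited on server' }));
// { title: 'edited on server' }
```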


Scope

This proposal currently aims to tackle only the scaffolding of CRUD operations for your server-side and client-side views from a model perspective. It does not intend to make assumptions about whether you are using shared templates, or how or where you should render your data.

Rather, what we are talking about is that instead of spending your time creating representations of your objects, their client-side storage and their CRUD methods, you are freed up to spend your time on your application’s logic. How, when and if you choose to call the CRUD methods is still completely up to you.

Considerations

* Changes to the schema may be difficult to map, although given a diff one could theoretically update existing data (assuming a reasonable mapping between changed data types exists.)
* There are many kinds of generators needed to support all the popular back-end and front-end stores. The community could potentially assist with this task.
* Data mapping for front-end may prove very taxing in the cases where - say - localStorage is the only available storage option.
* The generated code should observe sensible defaults, for example there should be authentication support for the server-side APIs. Ideally speaking the community would own the generators and ensure, with guidance, that security, privacy and speed are core principles of all generated code.

We need your feedback

Let us know what you think of the proposal! Would this help you? If so, how? What do you think the proposal lacks? Is there anything you would do to improve it? Are we crazy?

Implementing this proposal is not currently on any Yeoman roadmap, but we may consider adding it if there are enough developers that would find it beneficial to their workflow.
76 comments
 
If the main language (both server- and client-side) is JavaScript, it would be useful to have a validation service shared between server and client.
 
+Antonello Pasella I don't see why it needs to be JavaScript for type validation to make sense. As in, you could have a PHP or Java server-side setup and, if you've set a data type of - say - a float, both could (just as the JS could) validate and enforce that type.
 
To make it really usable, I wonder how the browser store could be pruned upon an update on the server (when a model entry is modified by another user while this particular device is offline).
 
+Paul Lewis I think that was intended to be read as "especially if", and not "only if". Great thing about JS on the client and server is the same validation code can be run in both places. Talk about staying DRY...
 
+Christopher Parker Yeah, great point! However, I think it's important to view this as abstractly as possible. Many server-side implementations are Ruby, PHP, Python or Java today. But yes, definitely if it's JS all the way through it'd be very beneficial :)
 
This sounds like a great idea! Totally for it, and would use any tools that help in this kind of task. Willing to help as well, just point me to a repo (when it exists).
 
So, at this early phase we're not locking the idea into any particular stack as such - if there's enough community interest we may implement a POC using one stack and provide a spec/guide to show others how to achieve something similar using PHP/Java and so on.
 
Seems to me the hardest problem is going to be conflict resolution. If there's any chance of mutating entities in the data store when offline, then you have to resolve any conflicts with subsequent changes when you reconnect to the server. Even if you decide to do trivial, per-field, last-client-wins resolution, you still end up wanting consistent object versions (or a reliable global timestamp, but that's hard).

Consistent object version numbers can be hard to do directly in a database (with no object-frontend affinity). If you're not careful, before long you'll end up with a full OT stack, a la the Drive Realtime API.
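The version scheme described here could be sketched roughly like this; the function and field names are illustrative only:

```javascript
// Sketch: each mutation carries the document version it was based on.
// The server applies it, and bumps the version, only if that base
// version is still current; otherwise the client must rebase.
function applyMutation(doc, mutation) {
  if (mutation.baseVersion !== doc.version) {
    return { ok: false, doc };                 // conflict: stale base version
  }
  return {
    ok: true,
    doc: { ...doc, ...mutation.fields, version: doc.version + 1 }
  };
}

const doc = { id: 1, title: 'draft', version: 3 };
const first  = applyMutation(doc, { baseVersion: 3, fields: { title: 'v1' } });
const second = applyMutation(first.doc, { baseVersion: 3, fields: { title: 'v2' } });
console.log(first.ok, first.doc.version, second.ok); // true 4 false
```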
 
> yo crud schema.json

The format of this schema.json would need a lot of thinking about. If you want to interop with statically typed languages then you need to be able to define types for the data. Other schemas that do this, like WSDL, tend to be a mess.

I guess at least it isn't XML :)
 
+Joel Webber Agreed, there's plenty about this that wouldn't be trivial. "Where is the source of truth?" is one great example. Ideally speaking the solution would iterate to such complexity over a well-defined roadmap. As Addy said, this is simply a toe-in-the-water thing to see if it makes sense to everyone. If it does then the heavy duty thinking caps will need to go on! :D
 
+Joel Webber the offline layer is undoubtedly going to be the most difficult aspect of implementing this proposal. Conflict resolution has been something other sync efforts have had to work around (e.g. we've been speaking to the folks behind Chrome's syncFileSystem), but we may target per-field, last-write-wins resolution first and opt for a more advanced resolution scheme later on.
 
+Paul Lewis Makes perfect sense. I mention it because I believe conflict resolution is the really hard offline problem that goes oft-unmentioned because everyone's busy thinking about the other hard problems, relative to the norm in native apps (storage, APIs, client version skew, etc).

FWIW, most native apps completely punt on decent conflict resolution, but in a way that's completely ad hoc and difficult to generalize. I've been responsible for my own share of mistakes in offline/realtime apps, and I'm busy digging myself out of a big pit of related technical debt right now, so it's "top of mind" for me :)
 
+Mat Scales Types for the data are a legit concern. We're going to open up discussions around the format of that schema with the community if this idea ends up getting off the ground, but there may also be some flexibility for the tooling to support a more customizable schema moving forward. tl;dr: I agree :)
 
+Joel Webber Definitely. I think one of the benefits of knowing the entities up front is hopefully you can make less generalized decisions about how to store things, or allow the developer to indicate how things should play out. I'm interested in what the developer community feels about those things, because really the idea is that this is balancing their own app's specific requirements against providing a generic solution.

With all that said, this is exactly the feedback we want and need on this, because it takes a good number of sharp brains to make something like this a successful and useful tool.
 
+Addy Osmani I think that makes sense. Do be wary, though -- per-field last-man-wins can still get tricky in some cases without a consistent object version (i.e., each mutation sent to the server knows which version of the document it's related to, and the version is only bumped on the server).

You'll also want to make sure the clients see a stream of mutations (after bootstrapping the full object), not entire objects. Otherwise it gets really hard to filter out redundant changes in the UI when your own mutations get rebroadcasted to you.

Can you tell I've been wrestling with this a lot lately? :)
 
+Joel Webber :) One of the reasons we wanted to put together this proposal was that an increasing number of developers have needed to craft their own custom workarounds to address these issues. The scaffolding side of this aside, offline is hard - conflict resolution, even harder. Your insights into this problem (100% agree we need to take care with sharing mutation streams) would definitely be useful if we end up developing a solution based on the proposal.
 
+Mat Scales Well pointed out. I like that couch allows clients to know when there are going to be conflicts so that something can be done about it, either via a merge or different version. I think that's something you can do using the Change API. We need to spend more time investigating what Couch has done well and what lessons we can take from there, but we'll add it to our research list :)
 
Yeah, it's always better to start off simple and work up to complexity later on.
 
+Addy Osmani Glad to chat anytime -- and there are some people in Google NYC that know a thing or two about conflict resolution you might want to talk to as well.

+Mat Scales It looks like Couch produces versions for all documents that it can use for this purpose, which makes sense. I like the idea of pushing knowledge of the conflict into the data itself (so that it can be resolved by app code later), though it's not immediately clear from skimming that article whether it would be sensible to push that all the way to client code (e.g., would the client have to see N recent doc versions to see far enough back to have all versions participating in a conflict?).

Good food for thought, though. Now I'm going to try and stop hijacking this thread so everyone can discuss the other aspects of the proposal...
 
Considering validation: not all validations can be performed on the client (for example, uniqueness), and all validations must be performed by the server, as you should never trust POSTed data. So the client must appropriately reflect validation errors and others (network, timeouts, etc.) from the server, which may be impossible in offline mode.

This is where, e.g., ember-data currently fails horribly IMHO.

I am looking forward to your implementation. Will it be named Yeaman?
 
It sounds very ambitious, in a good way! 

In something like PHP and node, you have a ton of different frameworks (Laravel, CodeIgniter, Yii, etc.) and no one really writes from scratch anymore. PHP seems to leverage a wider variety of frameworks than the other server side languages.

So would yeoman pick a PHP(in this case) "style" or convention/framework and start on that foot?

The same question kind of applies with Node too, I guess. It probably won't just be a connect server passing static files, it would be most likely express.

I am picturing things working a lot like grunt does now, small plugins augmenting the output and functionality. Is this a good assumption?
 
You may be a little bit crazy, because it is not an easy task, but I'd love to use something like that. Even better if the first generators are for AngularJS and Tailbone (https://github.com/dataarts/tailbone) or Google Cloud Endpoints
 
+Niklas Hofer Name TBD :) As per our conflict resolution strategy, validation is something we still need to properly flesh out. Agree that not all validation can (or should) be done on the client. For the first implementation (if that happens) we would likely opt for a sane balance between validation on both the client and server but allow developers to customize this as per their needs later.
 
+James Doyle Great questions. In terms of the server side of things that's totally up for grabs. Well, it all is really.

One thought would be to provide multiple generators, i.e. several PHP generators, and certainly we should make it very easy for developers to customize the output to suit their needs. It really depends on their own specifics, and what we're trying to avoid is a monolithic, overly-prescriptive approach. What seems much better is settling on an entity format from which existing generators for both server and client can be used to actually do the scaffolding.

So, for example, a Ruby developer will want to use gems I would guess, so why not generate gems that she can use on the server side? No additional tools needed.

Of course there's a large amount of time and effort to build such generators, and my hope would be that the community would help out by contributing time, experience and expertise. There's much to consider: validation, conflict resolution, but I think we can do it :)
 
It's definitely an interesting idea and it may have a nice side effect: if you have a schema and a basic persistence implementation, this can also be used as a skeleton for multiple persistence backends, each one of them with a single spec to adhere to, which would help in separating concerns completely between client and server.
 
This idea sounds excellent but very ambitious. Just getting the framework right would be a big project and then you need people to actually create the generators.

Ultimately, though, all of that effort will still have to be spent if people want to persist data from front end to local to back end, so why not have everyone club together and do it really well just one time?

So +1 on further developing the idea. I'm interested in turning up if you have any more public discussions about this project. Going to go away and think about this some more!
 
+Doug Berringer We're in regular contact with +Brian Ford from Angular and would love to hear more about the scaffolding gem. We'll spend some time looking into it.

+Alexandre Rosenfeld Those are very valid concerns. Scaffolding certainly won't solve everything, nor do we think it's going to be the right solution for everyone. There are going to be applications that may end up so complex this just doesn't cut it. We expect that. For those apps, however, where avoiding writing even more boilerplate does offer genuine time-savings (and you don't have to massively move away from what is created), we think it could be useful. Also keep in mind, what this proposal offers more than the scaffolding alone is the offline layer.
 
Awesome you're thinking about this...all this is desperately needed for offline JS.

About code generation, important not to go too OTT as JS is already a dynamic language; so specifically it should work like ActiveRecord and ensure everything is backed by a generic hash, with dynamically generated accessors (not static code generation). So yes it's nice for application code to say car.door() instead of car.attribs.door or car.door...but only if there's no "door = function()" sitting there in generated code.

And I agree with +Joel Webber, one of the big challenges here is concurrency; it's really too simple in the absence thereof, because a user couldn't even reliably use the app from 2 separate tabs. You probably want versioning to stand on the shoulders of HTTP freshness indicators at some level (ETag and last-modified). Which some APIs, including Google Plus, actually integrate into their JSON responses (as well as their HTTP headers).

About the language wars, Node/JS first is a no-brainer initially. Putting on my Completely Biased hat, I'd say Rails is a great follow-up due to its convention-over-configuration principle (and popularity). I believe +Yehuda Katz is aiming to standardise JSON formats coming out of Rails (or Rails-API), so it's a good fit.
 
This sounds very interesting... I have been thinking about something like this. 
 
Would this be a sort of ORM, like Entity in .NET, but one that would handle offline mode?
 
+Brian Carver The implementation is entirely up for discussion. What we're really looking at is whether, by declaring entities up front, we can a) provide CRUD APIs, b) ensure type restrictions, etc. are met, c) bake in offline support and d) support the vast array of languages and libraries. How that gets done is step 2 ;D
 
Nice. I think it would be very helpful then. 
 
+Michael Mahemoff We have yet to discuss what stack the initial implementation might take, but I would personally favor Node/JS just because they work well together and we could probably borrow some of the work that went into ExpressStack. Agree that we need to be careful not to go OTT :)
 
Really great idea and something I am looking into at the moment, but in a more opinionated and simple fashion (Backbone for the frontend and Symfony as the backend). I'm interested in helping out developing the infrastructure needed for Symfony.
 
+Daniel Anderson Tiecher That's great :) I think for me one of the key points is its agnosticism towards the actual implementation. The generators ideally should be able to cater for multiple languages on the server side, and different libraries / frameworks on the client side. But we'll see :D
 
+Paul Lewis Sure thing, a broader involvement by the community would ideally create a solid foundation for the really hard problems like offline support, conflict resolution, schema definition, etc. while enabling minor ecosystems to build upon this work and augment it to work towards their way of doing things. Oh, content negotiation, unit and functional testing, server side caching (etags, anyone?) and other topics could fit into the research as well. :)
 
JS sucks for anything but client. This problem should already be solved if the client is using test first best practices.
 
+John Anderson This isn't about any particular language, nor about testing strategies. Rather, it's about the fact that developers on server and client sides spend a long time scaffolding out the same kinds of things (classes, objects, APIs, storage) for every project and whether this can be alleviated. Additionally if we can bake in offline storage on the client side then we will hopefully start seeing offline-ready apps by default :)
 
Offline is an important feature for any app that wants to move into the '90s.
 
I don't need scaffolding. I need a good library that handles the synchronization of data between client (using IndexedDB) and server (node.js).
 
+Paul Lewis My point about test driven dev....no reliance on server side....server could die a fiery death....local interface should still be available. Test first dev should have self contained local interface.
 
+John Anderson Grails is server-side, and I believe that right now server-side scaffolding solutions are more advanced than client-side ones. My hope is that we can still leverage the existing server-side generators (and build new ones for non-Java implementations that don't have any) as well as creating good generators for client-side. It's about unification of the whole stack, not just client or server, and automating the extremely dull tasks of offline and scaffolding for both.
 
+John Anderson Absolutely, and that's something we could look to make a part of this, so that's great feedback. And if we can bake in offline then a local store could in theory respond when the server is unavailable (assuming the developer is happy with that being the case). There are many use-cases to work through :)
 
Common generation of client and server scaffolding from a single interface definition? It's the reinvention of SOAP. And DCOM. And CORBA. Maybe even more.

The true nature of a good RESTful web interface is that you don't need to share complex API descriptions; the interface is self-describing and possible to discover through exploration. Yes, this means some effort duplication, but it also means that the interface can be used without that specialised API descriptor. The interface is the descriptor.
 
+Addy Osmani This is a great proposal and I think a good start would be to take a look at existing ways offline is baked into sync.

PouchDB is a great way to sync data to the server, with offline storage. This sample application (http://nparashuram.com/conference ) stores data offline and syncs when the connection is restored.
There are also Backbone offline adapters that store offline, and can be augmented to sync when online again.
 
This sounds like a massive undertaking. Don't forget the lessons learned on the road to Yeoman 1.0. I'd break this idea up into smaller features (generating a REST API, abstracting offline/sync, etc) that can be used independently and much sooner. 
 
+Dave Geddes Absolutely. The implementation needs plenty of discussion, assuming we are all of the opinion it's a good idea. Also, yes, decoupling and modularising is extremely important :)
 
Yup, that would definitely be awesome.
 
+Paul Fazzino breezejs is a very interesting resource/entity-first client-side development system, completely agreed, but it's also very important to realize that what powers it is one of the most titanic and massive of resourceful server-side back ends: OData. I'm massively in favor of what breezejs has constructed, and of how integrated and consistent development looks to be there, but what it builds on the front end seems a direct result of the back end.

I don't know what kind of API coverage the front-end breezejs has versus the OData specification, but that question sticks out in my mind: how much power is required to make that entity-oriented front-side development environment?
 
+Donal Fellows What specifications are you using to have your endpoints describe themselves? How do we explore and enumerate possible interfaces? Are you talking about something extending RFC 5785, LRDD, URI templates? If I have a bot, how does it do this exploration of what endpoints are available?
 
+Morgaine Fowle The “endpoint” is just a URL and there's no real concept of an interface (in a Java/C# sense). You could make the description of the master resource contain (or link to) assertions about some interfaces that might pertain, but they'd be really non-normative. I suppose one key would be to ensure that links are always clearly that (I like the href attribute from the XLink spec) and you could use semantic annotations (RDF, OWL) to do the interface description. Or even link to WADL or WSDL. (There are an embarrassment of ways to describe REST interfaces, in large part because REST doesn't mandate one way.)

With the proper HATEOAS REST model, everything is linked and all clients have to do is follow links (or go back in their history; they're allowed to remember URLs even if not to synthesize them). This is a very different philosophical approach to the classic interface-driven approach. Mind you, I write my REST webapps interface-first, but the thing I'm doing is writing a web site that happens to be the interface to an application, rather than putting a web-ish interface in front of the app. Which sounds like a circular argument; sorry about that. Perhaps it is better to say that the real interface is defined by the HTTP operations and content types transferred, and not by the programming language used to create that interface.
 
Not sure if it was mentioned already, but there is a project called SailsJS that aims to help developers create REST APIs with Node.js and Express, which I think could be integrated into Yeoman. SailsJS project page: http://balderdashy.github.com/sails/
 
I think offline should be separated out from the scaffolding of client- and server-side CRUD. Offline is a tough nut to crack, as others have said, and I don't know that it's really all that necessary. You're either at home, where you have a persistent connection, or you're on your phone/tablet, which has a cell connection. The places where you need offline are getting harder to find...

I've worked on a desktop client-server application for the past 6+ years that does exactly this. Not only are conflicts a problem, but memory and disk footprint of the client is a problem. We're actually moving to a web app online-only model in large part because we don't get enough value of out of the offline capabilities for the pain they cause (both to the user and to our development team).
 
+Nate Finch Offline requires specifically designing the app to support it; if the app requires a common shared state to make any sense at all, it just won't work offline.
 
+Guario Rodriguez Sails.js could work as one part of the solution here: an option for the online/server-based use case to support a RESTful API for entities over HTTP and Socket.io. I'm in full support if you guys want to go that direction. Sails has some capabilities for serving templates, but it is definitely going to continue to be focused on serving easy APIs.
 
+Mike McNeil how does Sails.js couple with front-end frameworks like Ember.js? Since we are moving into a period where back-end frameworks are becoming platform-agnostic (aka what you seem to be trying to do), these two seem to couple quite well.
 
Sails.js is just an MVC framework for Node.js; it just happens to provide a RESTful JSON API out of the box with whatever database adapter you choose. Your controllers also support WebSockets; it does this by mocking up Express request and response objects which actually map to Socket.io.
 
+Paul Lewis I mean that it is simple to have the same validation functions working in exactly the same manner if we have both server-side and client-side JavaScript. Having client-side validation synced with server-side validation in many languages (such as PHP, Java, .NET, etc.) will be a pain in the ass :D
 
Don't know if this is the place, but I had a few issues running the latest version of Yeoman 1.0 on Mac OS X 10.8.3 (i.e. PhantomJS not being found on grunt run, and being unable to install anything via Bower).
As it took me some time to figure out the whole thing, I thought I'd share my findings here:
 
For the PhantomJS issue, simply type the following in your terminal:
cd node_modules/grunt-mocha/node_modules/grunt-lib-phantomjs
npm install phantomjs
cd node_modules/phantomjs/tmp
rm -R phantomjs-1.8.2-macosx
unzip phantomjs-1.8.2-macosx.zip 
cp -R phantomjs-1.8.2-macosx ../lib/phantom

For the Bower issue:
export PATH=/usr/local/git/bin:$PATH

Hope it helps! And thanks for this awesome workflow! Keep up the great work!
 
Since we are working on a web based visual modelling tool for #MongoDB, this sounds like a great project to me! Is there any way I can follow the progress / help out?
We currently deal with code generation based on Mongoose schemas, which are JSON documents in fact and can be converted to / from IETF JSON schema documents.
 
I have a love/hate relationship with sync...

Would CouchDB's eventual consistency model be suitable for sync?  When changes are sent to a server, a version parameter could be sent.  The server could then choose whether or not it uses it to check for conflicts.

If a conflict is detected, it could respond with a suitable HTTP status code so the client could know, then at that layer Yeoman/AngularJS/whatever could trigger a 'conflict:sync' event so applications can deal with conflicts according to their own domain-specific requirements.  Some applications will want to present a UI widget for resolving conflicts (like old-skool iSync), others might say 'it's cool, just save a copy of the old version and overwrite with this one.' (which is what CouchDB does...)
 
Awesome :O

 
+Mike McNeil Think I may use this as the basis for my personal portfolio/blog :) Thanks. 
 
+Mike McNeil it will definitely take me time to wrap my head around this new way of thinking. I come from a rails background and JS, at least for me, was mostly front-end and the framework itself dealt with the assets folder. 

It seems, at least from what I can tell, we are moving towards RESTful APIs that don't have to deal with views/layouts and leave it to front-end MVCs. Am I correct in this assumption?
 
Exactly-- Sails still supports classic erb-style views, but most people (including my team) are using it for single-page apps, or for apps where client-side templates are at least a part
 
Well, I'm very late... but I think this is a great idea. While it would be nice to also generate the server-side models and do validation, a good offline sync solution for POJOs is what is REALLY MISSING out there: something that could be dropped in place of Angular's $resource and $http (not tied to Angular though) and just work. People already have working apps that they need to add offline capability to, and it would be awesome in itself to give them something they can use without changing the server side.