Proposal: Entity-Driven Tooling

tl;dr: What if one command could scaffold out the CRUD models/views for both your client-side and server-side code, with baked-in offline support? Would this help you? Would it solve a pain point of yours? Are there better ways to do this than what's described below?

The following describes a proposal the Yeoman team may consider as a future experiment in improving full-stack development with offline as a first-class citizen. We would love your feedback on it.

The Problem

Not enough developers are making use of offline APIs in their sites and applications. As a result, sites and applications fail when connectivity drops, since there is no local data store capable of fulfilling requests, and unnecessary network requests are made to the server-side to populate local model representations. Additionally, there is confusion about the landscape: which APIs are supported, and to what degree.

In addition to this, client-side developers are often siloed from server-side developers, despite the fact that their data entities are likely to be similar, if not identical. This can lead to inconsistencies in how data is stored and represented.

The Solution

We propose a fundamental change in the way applications are scaffolded. Developers define their data schema up front, or import an existing one from the server-side, and we automatically scaffold their server-side, client-side and offline/sync code for them.

Current offline web-app workflow:

* Design your model schema
* Set up a database using that schema
* Implement server-side CRUD
* Implement client-side CRUD
* Use an AppCache manifest, a localStorage store or the FileSystem API for the offline story
* Extremely manual process, with lots of repetitive steps
* Offline and sync are very hard to address from a tooling perspective; there is too much room for variance

Proposed workflow:

From a usability perspective, this workflow could be:

yo crud schema.json
-> scaffolds your client-side CRUD
-> scaffolds your server-side CRUD
-> bakes in an offline/db layer for you

Note: schema.json is either imported from an existing server-side model or is generated using another tool. TBD.
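For illustration, schema.json might take a shape like the following. This is purely a sketch; the format, entity names and field attributes here are hypothetical, not a settled specification:

```json
{
  "entities": {
    "todo": {
      "fields": {
        "id":     { "type": "integer", "primary": true },
        "title":  { "type": "string",  "required": true },
        "done":   { "type": "boolean", "default": false },
        "listId": { "type": "integer", "references": "list.id" }
      }
    },
    "list": {
      "fields": {
        "id":   { "type": "integer", "primary": true },
        "name": { "type": "string",  "required": true }
      }
    }
  }
}
```

A declarative format along these lines would give the generators enough information to enforce both types and referential integrity on each side of the stack.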

* Design your model schema
* Automatically generate server-side and client-side code to handle CRUD operations
- e.g. yo crud entities.json
- The code that is generated can be for any language, particularly in the case of server-side, provided there is a generator for it: Java, Ruby, Python, PHP, JavaScript
* Automatically generate backing store scaffold code, e.g. SQL statements.
- Potentially this could be automated if we have connection details for the stores, though more likely a developer will want to do a dry run of the generated code, especially for updates, to ensure nothing breaks.
* Automatically scaffold offline support
- We take care of sync
* Want to adjust your schema? No worries.
- Edit entities.json
- Re-run init
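To make the store-scaffolding step concrete, a generator could map entity fields to SQL column definitions along these lines. This is a minimal sketch: the schema shape, the `createTableSql` helper and the type mapping are all illustrative assumptions, not part of the proposal itself.

```javascript
// Sketch: turn one hypothetical entity definition into a CREATE TABLE
// statement. The input shape mirrors the (assumed) schema.json format.
function createTableSql(name, fields) {
  var typeMap = { integer: 'INTEGER', string: 'TEXT', boolean: 'BOOLEAN' };
  var columns = Object.keys(fields).map(function (key) {
    var field = fields[key];
    var parts = [key, typeMap[field.type] || 'TEXT'];
    if (field.primary) parts.push('PRIMARY KEY');
    if (field.required) parts.push('NOT NULL');
    return parts.join(' ');
  });
  return 'CREATE TABLE ' + name + ' (' + columns.join(', ') + ');';
}

var sql = createTableSql('todo', {
  id:    { type: 'integer', primary: true },
  title: { type: 'string', required: true },
  done:  { type: 'boolean' }
});
// → CREATE TABLE todo (id INTEGER PRIMARY KEY, title TEXT NOT NULL, done BOOLEAN);
```

The same field definitions could equally feed a WebSQL/IndexedDB generator on the client, which is what keeps the two sides consistent.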

Primary benefits of this approach are:

* Built-in offline mechanisms for client-side
- A developer can call the JavaScript APIs and an offline-capable store, e.g. IndexedDB, can be queried for the data.
- The offline storage can be populated as part of handling the response from the server-side.
- The right API can be selected to fulfill the offline requirements depending on the platform capabilities.
- The offline storage component is effectively transparent to the developer.
* Automated scaffolding of client-side and server-side code (including referential integrity and type enforcement) for:
- APIs and endpoints.
- Model generation.
- Store scaffolding:
  - SQL table create/update statements for server-side.
  - WebSQL/IndexedDB for client-side.
* Off-the-shelf support for best practices, e.g. server-side authentication for API endpoints

Practical vision

We have an HTML-based web app that allows you to visually create your app’s data model. From there we can scaffold out:

* Server-side API endpoints
* Server-side storage
* Client-side models and APIs
* Client-side offline storage

By doing things this way we make the model the driver for the entire stack. Although both client-side and server-side frameworks commonly offer model representations, our position is more philosophical: we assume a unified, decoupled model shared by both sides, from which we can generate the implementations for each.

Server-side API endpoints and storage

If we know what data you want to store we can create a RESTful API in any language: PHP, JavaScript (Node), Ruby or Java. We can ensure that all data coming in meets the contract set out by the model, both in terms of types and referential integrity. Essentially CRUD operations can be created very quickly in the server-side language of choice, and backed by the store of choice, whether that’s SQL Server, MySQL, Redis or PostgreSQL.

Client-side models and APIs

Similarly to the server-side work, we can also create a set of APIs and models for the client-side part of the stack. Since we know what JavaScript is required for the models, they can be scaffolded out using Yeoman or Grunt, and API methods that match the server-side implementation can be generated alongside them.
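A generated client-side model could mirror the server-side endpoints one-to-one. As a sketch, assuming the hypothetical todo entity and /api/todos route from above:

```javascript
// Sketch of a generated client-side model whose CRUD methods mirror the
// generated server-side routes. Entity and endpoint names are illustrative.
function Todo(data) {
  this.id = data.id;
  this.title = data.title;
  this.done = !!data.done;
}

Todo.endpoint = '/api/todos';

// Each CRUD method maps onto the matching server-side endpoint.
Todo.create = function (attrs) {
  return fetch(Todo.endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(attrs)
  }).then(function (res) { return res.json(); })
    .then(function (data) { return new Todo(data); });
};

Todo.get = function (id) {
  return fetch(Todo.endpoint + '/' + id)
    .then(function (res) { return res.json(); })
    .then(function (data) { return new Todo(data); });
};
```

Because both sides are generated from the same schema, the client model and the server contract cannot drift apart silently.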

Client-side offline storage

As a related benefit, since we know the data entities that our application is intending to pass around, the theory goes that we should also be able to create appropriate client-side storage for these items up front, again creating CRUD methods which can integrate with the APIs.

For example, requesting data from the API could trigger a request to a local store first, and should that fail, we would then make a network request to our generated server-side API for the live data. Adequate mechanisms for failed network requests (or timeouts) can also be built in to the API for both client and server code.

The offline storage would be transparent to the developer, and could be backed by (mobile-friendly) WebSQL, IndexedDB or localStorage, depending on the requirements.
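The local-first read path described above can be sketched in a few lines. The localStore and api interfaces here are illustrative in-memory stand-ins for the generated storage and network layers, not a real generated API:

```javascript
// Sketch of the read path: consult the local store first, and only fall
// back to the network when the record is missing locally.
function read(localStore, api, id) {
  return localStore.get(id).then(function (record) {
    if (record) return record; // fulfilled offline, no network request
    return api.fetch(id).then(function (fresh) {
      // Populate the offline store while handling the server response.
      return localStore.put(fresh).then(function () { return fresh; });
    });
  });
}

// In-memory stand-ins for the generated store and API layers.
var memoryStore = {
  data: {},
  get: function (id) { return Promise.resolve(this.data[id]); },
  put: function (record) { this.data[record.id] = record; return Promise.resolve(record); }
};
var fakeApi = {
  calls: 0,
  fetch: function (id) {
    this.calls += 1;
    return Promise.resolve({ id: id, title: 'fetched from server' });
  }
};
```

On a first read the network is hit once and the store is populated; a second read for the same id is served entirely from the local store, which is exactly the transparency the proposal is after.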

Existing Schemas

In the event that the developer is working against an existing schema on - say - the server-side, it should be possible to derive a representative model for that schema, from which the client-side can then be built. That is, it should be possible to either import or export a schema from the tool, such that explicit buy-in from server-side developers is not required, though advantageous.

Store Emphasis

The developer should most likely be allowed to choose whether the client's data store or the server's data store is to be considered the master. For example, the developer of a mobile site or application may choose to make the client store the master, essentially using the server-side as something of a sync solution.

In other cases, for example collaborative tools, having a single source of truth may be paramount, and therefore choosing to make the server-side the master is more suitable. The generated code should allow for this distinction.
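One way this distinction could surface is as a flag in the generated configuration. The keys and values here are hypothetical, purely to illustrate the shape of the choice:

```json
{
  "sync": {
    "master": "client",
    "conflictResolution": "last-write-wins"
  }
}
```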


This proposal currently only aims to tackle scaffolding of CRUD operations for your server-side and client-side views from a model perspective. It does not intend to make assumptions about whether you are using shared templates, or about how or where you should render your data.

Rather, what we are talking about is that instead of spending your time creating representations of your objects, their client-side storage and their CRUD methods, you are freed up to spend your time on your application’s logic. How, when and if you choose to call the CRUD methods is still completely up to you.

Challenges

* Changes to the schema may be difficult to map, although given a diff one could theoretically update existing data (assuming a reasonable mapping between changed data types exists).
* There are many kinds of generators needed to support all the popular back-end and front-end stores. The community could potentially assist with this task.
* Data mapping for front-end may prove very taxing in the cases where - say - localStorage is the only available storage option.
* The generated code should observe sensible defaults, for example there should be authentication support for the server-side APIs. Ideally speaking the community would own the generators and ensure, with guidance, that security, privacy and speed are core principles of all generated code.

We need your feedback

Let us know what you think of the proposal! Would this help you? If so, how? What do you think the proposal lacks? Is there anything you would do to improve it? Are we crazy?

Implementing this proposal is not currently on any Yeoman roadmap, but we may consider adding it if there are enough developers that would find it beneficial to their workflow.