Daniel Friesen
Web Developer, Anime fan, Wiki enthusiast


How far has Feedly gotten with making it possible to merge Feedly accounts?

This request has been around for several years, and every response I've seen has been that Feedly knows about it and that it will take time to do right.

I have a Google Apps user I really wish I could delete, but I can't: it's used for my main Feedly account, and I can't merge that account with the one I use on my phone.

Post has attachment
A little CSS technique I used recently.
#css #css3

Post has attachment
A method of getting iOS to display a numeric keypad on number inputs that improves the user experience everywhere.
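The attached post isn't preserved here, so the exact snippet is lost; a widely used trick at the time (an assumption on my part, not necessarily the method in the post) was to combine a number input with a digits-only `pattern`, which makes iOS show its numeric keypad while other browsers keep their normal number-field behavior:

```html
<!-- pattern="[0-9]*" is what triggers the iOS numeric keypad;
     other browsers ignore it for keyboard purposes and just validate -->
<input type="number" pattern="[0-9]*" name="quantity">
```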
#html5 #mobile #mobiledevelopment #ios

Post has attachment
From a few days ago. Adding CC-0 to most code snippets on my blog.

Why did Google decide to drop the descriptions in links that are shared?

I'm surprised that so far no cloud provider or VPS provider I've seen has tried or offered this possibility:
- Create or use some form of expanding, distributed storage where all the storage across multiple machines is combined into one massive file store (Gluster/Ceph style). Allow virtual compartments to be created to denote individual filesystems (just an ID, ACLs, and so on; they would merely identify which file belongs to which filesystem, with no restricted physical block storage).
- Compile the driver for this filesystem into the kernels of each OS that is offered.
- Change the fstab, bootloaders, and so on so that filesystems are mounted from this virtual storage instead of standard block devices.
Entire virtual machines would then run entirely off this virtual storage.

The provider could de-duplicate data, saving piles of space. Base OS files would add almost no overhead, as they would be almost entirely de-duplicated, and individual files would be de-duplicated as well. Providers would not need to provision huge virtual block devices when machines use only small parts of them. And clients could set filesystem compartment caps of 10GB, 30GB, 10TB, and so on, while paying for only the 3GB their virtual machines actually use.
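To make the savings concrete, here is a toy sketch of block-level de-duplication (my own illustration, not any provider's actual design): two VM images that share a base OS end up storing the shared blocks only once.

```python
import hashlib

class DedupStore:
    """Toy content-addressed block store: identical blocks are stored once."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}   # sha256 hex digest -> block bytes (stored once)
        self.files = {}    # file id -> ordered list of block digests

    def put(self, file_id, data):
        digests = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # no-op if already stored
            digests.append(digest)
        self.files[file_id] = digests

    def get(self, file_id):
        return b"".join(self.blocks[d] for d in self.files[file_id])

    def physical_bytes(self):
        return sum(len(b) for b in self.blocks.values())

# Two "VM images" sharing an 8KB base OS, each with a little unique data:
store = DedupStore()
base_os = b"A" * 8192
store.put("vm1", base_os + b"vm1 data")
store.put("vm2", base_os + b"vm2 data")
logical = len(store.get("vm1")) + len(store.get("vm2"))
print(logical, store.physical_bytes())  # 16400 logical bytes, 4112 stored
```

A real system would use variable-size chunking and reference counting, but the principle is the same: the shared base-OS blocks cost almost nothing per additional machine.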

Going further, this makes me imagine applications that run in individual mini-instances:
- An OS template is created by spinning up a template machine from a base OS that already has the basics of the environment set up, SSH-ing in, and installing whatever your app requires. Finalizing it shuts the machine down and registers its filesystem inside the big storage as a read-only OS template.
- You then register the other filesystems and their mount points: read-only storage for application files, temporary filesystems, shared writable filesystems for things like uploads, and so on.
- After putting your application files into the application filesystem (and perhaps setting up a provisioning system like Puppet), you go into production. Each production instance mounts the OS filesystem and the other storages from the virtual storage, gets provisioned, and starts running the application. Many of these can be launched cloud-style, providing trivial load balancing.
- By accessing the original application filesystem over some protocol (WebDAV, FTPS, SFTP, and the like), the application files can be changed and the machines re-launched. A dev/test environment could even be launched by automatically duplicating the application files into a new set of compartments and spinning up one machine that changes live with them; a new version could then be launched in production.
- Systems can be upgraded by pressing a button that opens a machine running the OS template, upgrading it, then pressing another button to re-run the original finalization process. All the individual production VMs are then replaced with ones running the new base system.
Sort of like some of the stuff Vagrant does, but for production. Or like what Heroku or AppEngine do. But usable for absolutely any application in any language.

Now that I think about it... no one has created a really good, friendly, truly open backup tool. Well, at least not one that enables easy use along with an economy of helpful off-site backup providers.

We have a bunch of flexible (or in some ways inflexible) low-level command-line backup tools. But they are not user-friendly, and they do not make off-site backups easy. At the very least, the commercial storage providers we'd use for backups do not compete on backup-specific features, so storage costs will never be driven down.

We have a bunch of simple open-source backup GUIs. They are user-friendly, but they typically encourage off-site backup even less than the CLI tools do.

And we have bunches of closed-source backup providers offering various assortments of friendly GUIs, client-side encryption, incremental backups trickled out over time, and so on. These do encourage off-site backup, but using them ties you to a single provider, leaving you to rely on its ability to survive. That lock-in also does not provide a good environment for the competition that would ensure fair pricing of backup storage.

What would be really nice is a new multi-platform, open-source backup client (plus GUI), and perhaps a server protocol too, since simplifying the configuration down to a dedicated protocol and setup would support real ease of use.
* The client would allow you to configure not one, but multiple destinations for your backups.
* And it would be in charge of trickling data to all of these destinations continually.
* ...or pushing to them much faster in bursts if said destination is something like a USB drive that is not always plugged in.
* It would have full support for client-side encryption of the data (probably encrypting the key itself with a password so restoration is possible).
* It would also be in charge of maintaining differential backups of the same file and expiring old versions.
* Perhaps the local client could also handle de-duplication.
* To balance optimized backup against ease of restoration, the primary backup destination could be a custom storage format. It would allow client-side encryption; differential storage; optional extra metadata such as permissions and owners without needing support from the underlying storage; and splitting large files into multiple parts (say, to back up large files onto old FAT USB drives for maximum platform compatibility). As a secondary mode, the client could also do a plain copy of the data onto a USB filesystem with no extra features.
* Local storage like USB drives could be used as one form of simple backup. People could also back up to their own home storage (if they have any), and the server could be run on machines at various hosting providers.
* Local family storage could be set up by adding a simple plug computer connected to a USB drive to the home network. Even without opening the system to the outside internet, your computers would back up automatically while you are at home and simply pause when you leave with your laptop.
* With most of the user-facing functionality in the client, multiple backup providers could start up. They would speak the server protocol and store backups off-site on their systems, and since the client makes it relatively trivial to switch providers, competition could help ensure they all offer sane prices.
* A bonus would be support for backing up things that are not always present, e.g. the contents of a USB drive while it's unplugged.
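The splitting-for-FAT idea above can be sketched in a few lines; `split_file` and `join_parts` are hypothetical helper names of my own, not part of any existing tool. On restore, the parts are simply concatenated back in order.

```python
import os
import tempfile

def split_file(path, part_size):
    """Split a file into path.000, path.001, ... each at most part_size bytes."""
    parts = []
    with open(path, "rb") as src:
        index = 0
        while True:
            chunk = src.read(part_size)
            if not chunk:
                break
            part_path = f"{path}.{index:03d}"
            with open(part_path, "wb") as dst:
                dst.write(chunk)
            parts.append(part_path)
            index += 1
    return parts

def join_parts(parts, out_path):
    """Restore the original file by concatenating the parts in order."""
    with open(out_path, "wb") as dst:
        for part_path in parts:
            with open(part_path, "rb") as src:
                dst.write(src.read())

# Demo with a tiny part size (a real tool would use FAT's file-size limit):
workdir = tempfile.mkdtemp()
source = os.path.join(workdir, "backup.bin")
with open(source, "wb") as f:
    f.write(b"0123456789")
parts = split_file(source, 4)  # backup.bin.000, .001, .002
join_parts(parts, os.path.join(workdir, "restored.bin"))
```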

#backupandrecovery #backupandstorage

Post has shared content
Today I met with the White House Office of Science and Technology Policy. In their conference room, there's this big shelf filled with artifacts from NASA -- models of NASA robots, even a meteorite older than the earth itself. But the real prize was what's next to the 4.7 BILLION year old Meteorite: a full scale Aperture Science Handheld Portal Device.

Somebody in OSTP has a brilliant sense of humor.

You have an all-you-can-eat order form. The Qty. field for each item only has space for two symbols, and you want to get as much as possible of one single item. What do you write?
The normal person: 99
The programmer: FF
The mathematician: ∞