The recent news of IEEE seeking to develop a new Ethernet standard got me thinking. Right now 10Gbps networking equipment is still terribly expensive and definitely not aimed at home networking needs, but if such equipment suddenly became cheap, commodity hardware, there'd be a lot of interesting things one could do. For example, I have a Linux server running 24/7 with Btrfs in a RAID 0 configuration: I could just install all my applications and games there over the network, resulting in much faster operation than from local devices.
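To put a rough number on that, even a trivial throughput test like the following Python snippet would show the difference (the file paths are placeholders, and you'd want to flush the OS page cache between runs for honest numbers):

import time

def read_throughput(path, chunk=1 << 20):
    # Sequentially read the file in 1MB chunks and report MB/s.
    start, total = time.time(), 0
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk)
            if not data:
                break
            total += len(data)
    return total / (time.time() - start) / (1 << 20)

# Hypothetical paths: one file on a local drive, one on the server's share.
for p in ("/home/user/test.bin", "/mnt/server/test.bin"):
    print(p, "%.1f MB/s" % read_throughput(p))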

The above is still a very simple use case, however. Imagine a network protocol that is first and foremost designed for caching filesystem data over the network, with all major OSes supporting such a scheme: if you noticed your system wasn't as fast as you wanted it to be, and the slowness was a result of insufficient storage I/O bandwidth, you'd just buy a small, dedicated box with 4GB+ RAM and one or more storage devices inside it, plug it into your network and POOF, all the computers in your network would get increased performance.

If the box was specifically designed without a GPU, with a CPU with hardware SSL, CRC and compression/decompression facilities, and an enclosure designed for efficient heat dissipation, you'd most likely even see a small benefit in your electricity bill over time, as the other computers' drives could go to sleep -- not to mention the obvious comfort of less noise and heat from them. The box itself could easily be tucked away somewhere out of sight where it bothers no one. Of course such a scheme would require low-level OS support and proper security measures, though boxes aimed at homes could ship with permissive defaults and offer a web interface or similar for the more advanced users who need stricter permissions.
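To sketch what I mean, the serving logic on the box's side could boil down to something like this -- everything here (the names, the wire format) is invented purely for illustration, and a real protocol would obviously need integrity checks, authentication and cache eviction:

import struct

RAM_CACHE = {}  # (path, offset) -> bytes; a real box would bound this by its RAM

def serve_block(path, offset, length):
    # The first request pulls a block off the box's own disk; every
    # request after that is served straight from RAM at wire speed.
    key = (path, offset)
    if key not in RAM_CACHE:
        with open(path, "rb") as f:
            f.seek(offset)
            RAM_CACHE[key] = f.read(length)
    return RAM_CACHE[key]

def handle_request(conn):
    # Invented wire format: path length (u16), offset (u64), length (u32),
    # followed by the path itself. Partial reads are ignored in this sketch.
    plen, offset, length = struct.unpack("!HQI", conn.recv(14))
    path = conn.recv(plen).decode()
    conn.sendall(serve_block(path, offset, length))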

The more RAM the box had in it, the more stuff it could offer instantaneously at full network bandwidth. With hardware compression/decompression it would be able to cache much more data efficiently than e.g. Windows does even with ReadyBoost, and the more networked computers you had utilizing the box, the bigger the benefits you'd gain overall, with the box caching the most-used data for fastest retrieval. For example, if all the computers in your network used Firefox as their web browser, the box could cache all the related files in RAM and serve them from there at around 1.2GB/s -- roughly the practical ceiling of 10Gbps Ethernet -- benefiting every single person in your home.
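The caching side of that is nothing exotic; roughly what I have in mind, with zlib standing in for the dedicated compression hardware (a sketch, not a real implementation):

import zlib
from collections import OrderedDict

class CompressedCache:
    # LRU cache that stores blocks compressed, so the same RAM budget
    # holds considerably more data than the raw bytes would allow.
    def __init__(self, capacity_bytes):
        self.capacity, self.used = capacity_bytes, 0
        self.store = OrderedDict()  # key -> compressed blob

    def put(self, key, data):
        if key in self.store:
            self.used -= len(self.store.pop(key))
        blob = zlib.compress(data, 6)
        while self.store and self.used + len(blob) > self.capacity:
            _, old = self.store.popitem(last=False)  # evict least-recently-used
            self.used -= len(old)
        self.store[key] = blob
        self.used += len(blob)

    def get(self, key):
        blob = self.store.get(key)
        if blob is None:
            return None
        self.store.move_to_end(key)  # the most-used data stays resident
        return zlib.decompress(blob)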

Similarly, the box could be used not only as a cache but as an actual installation target for large applications/games that you don't need on the go but do need when you're at home: you'd just right-click on an application (or use some configuration utility or whatever) and select "Transfer to Box" or "Transfer to local machine from Box" according to your needs, either saving local storage space or allowing for offline use.
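Underneath, "Transfer to Box" wouldn't have to be anything fancier than this sketch -- the paths are hypothetical, and a real implementation would need to handle running applications, permissions and failures:

import os, shutil

BOX_MOUNT = "/mnt/box"  # hypothetical mount point for the box's storage

def transfer_to_box(app_dir):
    target = os.path.join(BOX_MOUNT, os.path.basename(app_dir))
    shutil.move(app_dir, target)  # bulk copy over the network, then delete
    os.symlink(target, app_dir)   # the app keeps its old path, transparently

def transfer_to_local(app_dir):
    target = os.readlink(app_dir)  # where the data actually lives on the box
    os.remove(app_dir)             # drop the symlink...
    shutil.move(target, app_dir)   # ...and pull the data back for offline use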
 
iSCSI is what you want. Not as fast as a local SSD but pretty good. Even current gigabit will crank out a good 80MB/sec with iSCSI.
 
I already get such speeds with just regular CIFS shares, and 80MB/s is a lot less than what I get out of local storage devices. So no, iSCSI would be a downgrade and do nothing useful -- plus it doesn't actually do what I was talking about here anyway.
 
This is an interesting thought, but how is this better or different than running the application or share on a terminal server?
 
With the application running on a remote system and the local system only being used as a front-end you run into various kinds of obvious issues: the more clients you have, the more powerful the remote system needs to be; it's not efficient enough for graphics-heavy tasks (games, photo manipulation, video and so on); and the kind of box I described could be added to the network later, as an afterthought, when the need arises.

It wouldn't be useful for most enterprises, but then again, I was only talking about home users anyway.
 
There are caching boxes which also do compression that we use for WAN links.
 
If you want to increase performance on an Ethernet segment, try increasing the MTU setting on the IP stack; if you're going to be passing routers, also look at the DF (don't fragment) flag. Ethernet is not really an efficient protocol -- there is a lot of overhead. Another item you could look at is adding another network card to the hosts.
 
Tweaking MTU or DF ain't gonna help much, you know. The theoretical limit of 1Gbps Ethernet is 125MB/s and I'm already hitting ~110MB/s as it is; I could possibly squeeze a few MB/s out of it with some tweaking, but that's it. Adding new NICs isn't a solution either, what with having to run multiple cables and buy new routers, and Windows doesn't even support bonding of NICs anyway -- at least as far as I know. Linux has two different ways of bonding NICs; I don't know about OSX.
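Here's the back-of-the-envelope math for anyone who wants to check it -- only the fixed per-frame overhead (preamble 8 + Ethernet header 14 + FCS 4 + inter-frame gap 12 bytes) and 40 bytes of IP+TCP headers are counted:

LINE_RATE = 125_000_000  # 1Gbps expressed in bytes per second

def goodput_mb_s(mtu):
    payload = mtu - 40   # TCP payload carried per frame (no TCP options)
    on_wire = mtu + 38   # bytes the frame actually occupies on the wire
    return LINE_RATE * payload / on_wire / 1_000_000

for mtu in (1500, 9000):
    print("MTU %d: ~%.1f MB/s of payload" % (mtu, goodput_mb_s(mtu)))

That prints ~118.7 MB/s at MTU 1500 and ~123.9 MB/s with jumbo frames, so even going all the way to jumbo frames only raises the theoretical ceiling by about 5MB/s, and in practice you'd see less.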
 
I have to correct myself a bit: apparently some manufacturers ship tools for doing bonding on Windows -- Intel, for example. But those work only if all the NICs are from the same manufacturer and supported by the tool.
 
Depending on data flows and patterns you can set up separate segments and routes and turn the hosts into routers. Those are good numbers; I have not done a great deal with peer-to-peer networking. If your bottleneck is with the network, you could also (depending on the switch) tune the port settings. Perhaps applications can be tuned to require less overhead for communications.
 
You're right. Also look at the RAM on the NICs, and make sure you have the current drivers for them.
 
One additional point: prioritize the traffic.
 
I'm not entirely sure you've realized that I'm not talking about corporate networks. In a corporate environment bonding of NICs is more or less essential for its fail-safe capabilities, and there it is much easier to prioritize traffic and manage its flow, since you know what traffic can and should be expected. At home, usage patterns change rapidly and the changes may be quite large, and failing over when a NIC goes down isn't as important as extremely simple plug-and-play functionality and protection of personal information. Also, prioritizing traffic still doesn't raise the maximum bandwidth available; it's merely a quality-of-service improvement.

In hindsight I guess I should have waited to write my rant so that I could provide more concrete examples and details; I just felt the need to write it down while I still had the flow going. It's just that most homes these days have home networks, but no one seems to be trying to help people utilize all the capabilities of such networks -- none of the families I know do anything other than share an Internet connection, for two reasons: they do not know what can be done, and doing those things isn't made simple enough. Hell, none of these people even know how to share a file or folder on their Windows computers, let alone do anything more complicated.
 
Nita, I think I understand what you are saying. Without knowing the specifics, I am guessing you are looking at a private cloud solution for a home environment -- maybe an application-as-a-service type of thing -- for the purpose of improving throughput on a LAN segment. Historically there are two options: distribute the resources you wish to share, or bring the resources closer to the end user.

There are appliances that do the caching and compression, and they are most likely on eBay -- Riverbed, Packeteer, etc.

You could look at Fibre Channel on the hosts, or perhaps research proprietary protocols that are native to the OSes you are using.

Something to remember: when IEEE standards are developed, the inputs come primarily from large corporations -- Cisco, Microsoft and the like. They write these protocols to improve overall functionality, to best serve the wider community, and to serve their own bottom line.