Igor Workbench
A productivity and collaboration platform

Igor Workbench's posts

Buffering structured-IO streams (i.e. the pipe-ends of active processes) is actually quite simple:

There could be recursive streams where top-level items would be produced and consumed.  A field that grows would just be a stream on its own.  Each such stream would be opened separately.

A producing process could provide a whole range of such streams with some accompanying meta-info, and the consumer would select which ones to open.  This can also be seen as a form of output picking. #hurling-the-beacon
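A minimal sketch of that selection idea, assuming nothing about the actual protocol: a producer announces several named streams with meta-info, and nothing is produced until the consumer picks one by name.  All class and stream names below are invented for illustration.

```python
# Hypothetical sketch: a producer offers named streams with meta-info;
# the consumer selects which ones to actually open ("output picking").
import io

class StreamOffer:
    """One stream a producing process offers, plus its meta-info."""
    def __init__(self, name, meta, open_fn):
        self.name = name
        self.meta = meta          # e.g. {"format": "lines", "recursive": True}
        self._open_fn = open_fn   # called only if the consumer selects it

    def open(self):
        return self._open_fn()

def producer_offers():
    # The producer advertises a whole range of streams up front.
    return [
        StreamOffer("items", {"format": "lines"},
                    lambda: io.StringIO("item1\nitem2\n")),
        StreamOffer("log", {"format": "lines"},
                    lambda: io.StringIO("started\nfinished\n")),
    ]

# The consumer picks outputs by name and opens only those.
selected = {o.name: o for o in producer_offers()}
lines = selected["items"].open().read().splitlines()
```

A recursive (growing) field would simply be another `StreamOffer` nested inside an item, opened separately on demand.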


Scripting at the process invocation level

The intriguing question for #hurling-the-beacon is what facilities can be provided at the process invocation level.  Which is also of interest to the user, quite obviously.

The mechanisms we have inside a process may provide some clues.

Partial function application already exists in the form of aliases, for example.  Locations could provide a more versatile way to work with installations -- where tags could be used to segment the flat command (library) space according to varying needs.  And data-knobs may act as objects that encapsulate data and provide functionality.  Scheduling bots preserves interactivity through concurrency.
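The alias observation can be made concrete.  A shell alias like `ll='ls -l'` fixes some arguments of a command up front, which is exactly partial application; a quick sketch in Python (the `run_command` stand-in is hypothetical, not a real invocation API):

```python
# Aliases as partial function application: fixing leading arguments
# of a command invocation ahead of time.
from functools import partial

def run_command(name, *args):
    # Stand-in for a real process invocation (hypothetical); just
    # returns the argument vector it would execute.
    return (name,) + args

# "alias ll='ls -l'" expressed as partial application:
ll = partial(run_command, "ls", "-l")
invocation = ll("/tmp")
```

Tags and data-knobs would then be further parameters closed over in the same way, just at the installation or location level instead of the argument level.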

Typing is a stumbling block here.  I don't think it is what you would intuitively expect.  Communication with a process will probably remain in terms of text exclusively.  Maybe because we are in human-space here.  Maybe to decouple from machine representation.  #igor-configuration can contain field validation to assist the user, but I/O is text, I guess.  Typing could exist in some other form.  Like, what goal a process tries to accomplish.  For example, replacing DK-providers.  But:

This can be easily stretched too far, I guess.  A nice thing to muse about, though. :)
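The field-validation part, at least, is easy to sketch: I/O stays plain text, and a configuration entry (as #igor-configuration might carry it) merely validates fields to assist the user.  All field names and rules below are invented.

```python
# Hypothetical sketch: text-only I/O with field validation on top.
# Values are never typed; validation only checks that the text has
# the expected shape before it is passed on.
import re

FIELD_RULES = {
    "port": r"\d{1,5}",     # text that must look like a port number
    "name": r"[\w-]+",
}

def validate_field(field, text):
    """All values are text; unknown fields pass unchecked."""
    rule = FIELD_RULES.get(field)
    return rule is None or re.fullmatch(rule, text) is not None

ok = validate_field("port", "8080")
bad = validate_field("port", "eighty")
```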

What makes a scripting language a "shell around the kernel"?

From observation, everything on `$PATH` amounts to library calls in a global namespace, and system configuration is basically equal to script variables.  Both are threatened by unnoticed modification (maybe one through the other) and there is no tracing.

The command-line is secondary in this regard.  "Job" formulation with one-time-config needs to be distinguished from command-line scripting (combining multiple jobs), though.  `ARGV`, like the environment, is input to a process.

I care because #hurling-the-beacon might provide a chance for an exit strategy for super-global variables.  What system configuration (in the form of environment variables or something else) does your program actually access?  Only a small fraction, I suppose.

Imagine a configuration specification that also formulates access expectations for selected system-configuration values through #igor-configuration -- and possibly modification (System-OTC).  Just tracing, of course, and optionally rules for handling unexpected access (like notification).
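A minimal sketch of that access-expectation idea, assuming a declared list of expected keys: every read of the environment is traced, and reads outside the declaration are flagged (here just collected; a real system might notify).  The class and rule names are invented.

```python
# Hypothetical sketch: trace environment access against declared
# expectations, as a configuration specification might formulate them.
class TracedEnviron:
    def __init__(self, environ, expected):
        self._environ = environ
        self._expected = set(expected)  # declared in the config spec
        self.trace = []                 # (key, was_expected) pairs

    def get(self, key, default=None):
        # Just tracing -- the read itself is never blocked.
        self.trace.append((key, key in self._expected))
        return self._environ.get(key, default)

env = TracedEnviron({"HOME": "/home/igor", "SECRET": "x"},
                    expected=["HOME", "PATH"])
env.get("HOME")    # expected access
env.get("SECRET")  # unexpected -- would trigger a notification rule
unexpected = [key for key, was_expected in env.trace if not was_expected]
```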

Configuration is super-global, for sure.  It's a matter of making it comprehensible.

The parts about taskonomy and how blacksmiths organize their hammers are pretty revealing for #igor-configuration.

Update: In short, they put a particular kind of hammer where they will need it, instead of putting them all into the hammer closet.

Variation without duplication

Software is an attempt to mold the mountains and streams of bits and bytes, bestowed upon us by the creators (of hardware), and unlike in the physical world, we have to add contours ourselves.  Well, actually we have to build our houses in the physical world too, but in the digital world we (should) think in terms of blueprints.  Everybody has the same, most recent blueprint, and brings to life whole worlds in an instant, just to produce one little thing and destroy that world again.  Configuration, not operation, is personalization and that needs to be harnessed.  *Cascading configuration* does just that while trying to avoid duplication.

Exactly.  Maybe add a little bit about how duplication and dependencies vie against each other for the digital currency of complexity that you have to pay.  OOP principles may help.  (I don't know the German words; Dupli-käh-schön?)


The description of sub-module locations in the Location Graph post is lacking detail.  And there are actually two concepts that need to be distinguished.

On one hand you have Content Units that describe what a location contains.  For example, let's describe the data for the KDE document viewer Okular.  You might want to put the bookmarks under simple folder-synchronization in one module:


But the data for the review tool should be in a full fledged VCS:


These two strings could be in special files in two different modules of one Data-Space and Okular would know where to get and put which data.
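The attached example strings did not survive the export.  A purely hypothetical illustration of what such special files might contain, assuming a simple key-value notation (the `dm-scheme` key and its values are invented):

```ini
# module "okular-bookmarks": simple folder synchronization is enough
dm-scheme = folder-sync

# module "okular-reviews": review data goes into a full-fledged VCS
dm-scheme = git
```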

On the other hand there are sub-module Location-IDs.  Those can be seen as an extension of the location graph into the domain of a file-system (under the control of a DM-scheme).  Those can be used to configure module sub-paths or address them in scripts.
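A small sketch of what addressing a sub-module Location-ID in a script might look like, so paths are never hard-coded past the module boundary.  The IDs, the module root, and the layout below are all invented.

```python
# Hypothetical sketch: resolve sub-module Location-IDs to concrete
# paths inside a module controlled by a DM-scheme.
from pathlib import PurePosixPath

MODULE_ROOT = PurePosixPath("/data/okular-module")
SUB_LOCATIONS = {                 # configured per module
    "bookmarks": "state/bookmarks",
    "reviews":   "reviews",
}

def resolve(location_id):
    """Map a sub-module Location-ID to a path under the module root."""
    return MODULE_ROOT / SUB_LOCATIONS[location_id]

path = str(resolve("reviews"))
```

A script that uses `resolve("reviews")` keeps working when the project reorganizes its sub-paths, as long as the Location-ID mapping is updated with it.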

The problem here is that product data is usually owned by a project.  You cannot add location-IDs left and right if it doesn't make sense to other participants.  Interestingly, it does make sense for large projects to describe the upper levels of the tree with location-IDs and add descriptive comments.  Think about posts or tutorials that describe the file-system hierarchy of the Linux kernel or the Python interpreter.  So, even in the absence of path-specific configuration for those directories, it would make sense for a project to describe those stable directories in a meaningful way.

If the user needs additional location-IDs, they need to be tracked locally (and would most likely be represented as data-knobs).  Given that there will hopefully be a wide variety of specialized DM-schemes in the space between simple folder replication and powerful VCS, supporting sub-module location-IDs for arbitrary locations might not work entirely automatically.  The DM-scheme might not work as expected (e.g. leave empty directories behind).  But recommending not to add untracked files into a project-controlled directory-tree and getting hints from a file-alteration-monitor might turn out to be reasonably reliable.

Note that there is kind of a gray area, in terms of how far such locations actually make sense.  All of the location-graph is about conceptually organizing content.  Theoretically, someone might attempt to track a single line in a file with a location ID.  So if the user needs locations beyond what the project guarantees, it might not be critical (bookmark a file in a browser) or the problem could be approached in a different way (use a tool that knows about the semantics of the content).


A little update to the location-graph that you might find interesting:

Select-one-of-the-same (SOOTS) is a generalization of the fork-groups selection mechanism for all levels of the tree/graph below a workbench.  It removes the need to qualify nodes when adding a node of the same type next to another one, and it allows adding a whole inactive subtree without invalidating/changing the setup.

SOOTS allows easy, temporary merging/injection and subsequent comparison of whole workbenches.  That in turn enables you to keep workbenches task-specific!  You are less inclined to cram all of one project into one WB.  Above: abstract/refactor common configuration into project-defaults.  Below: use a shared backend for modules and see all branches across WBs.
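The selection mechanism can be sketched with a small tree structure: several nodes of the same type may sit next to each other at any level, exactly one per type is active, and adding another of the same type needs no qualification and changes nothing.  All node types and names below are illustrative.

```python
# Hypothetical sketch of select-one-of-the-same (SOOTS) on a
# workbench tree: one active sibling per node type, inactive
# subtrees coexist without disturbing the setup.
class Node:
    def __init__(self, node_type, name):
        self.node_type = node_type
        self.name = name
        self.children = []
        self.selected = {}   # node_type -> name of the active sibling

    def add(self, child, select=False):
        self.children.append(child)
        # The first sibling of a type is selected by default; adding
        # more of the same type changes nothing unless selected.
        if select or child.node_type not in self.selected:
            self.selected[child.node_type] = child.name
        return child

    def active(self, node_type):
        name = self.selected[node_type]
        return next(c for c in self.children
                    if c.node_type == node_type and c.name == name)

wb = Node("workbench", "wb1")
wb.add(Node("project-defaults", "shared"))
wb.add(Node("fork-group", "main"))
wb.add(Node("fork-group", "experiment"))   # injected, setup unchanged
active = wb.active("fork-group").name
```

Switching the selection (e.g. `select=True` on a later `add`) is what makes temporary injection and comparison of whole workbenches cheap.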


Close Collaboration

Your e-world (the WS) is your castle.  A place for individuality, privacy, and familiarity on top of common infrastructure.  This is what users want and a chance for software providers.  But it also means that one user will usually not be able to work in the e-world of another user when it comes to powerful tools.  The glove just wouldn't fit.

This is an issue when you want to collaborate in one physical location on one screen, a use case well worth supporting.  Programmers know it in the form of pair programming, but it has far more general applicability.  One executes while others think more strategically, or review during that execution.  Occasionally the roles are switched.

The solution could be another convention to capture the productivity situation within the context of a workbench and transfer it to the e-world of somebody else.  A lot of applications already support a variation of this in the form of UI sessions.  Those, however, are not meant to be exchanged, and they are not even meant to be interpreted by another application.  An itemization for the purpose of collaboration does not necessarily have to be that detailed and should specify resources more generally.  It could contain descriptive categories, default apps, files-and-locations, URIs, arrangement in windows/tabs/splits, and geometry hints.  A frontend could coordinate the switch.
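What such an itemization might look like, assuming nothing about the actual convention; every key and value below is a made-up example of the categories the post lists:

```python
# Hypothetical itemization of a productivity situation: resources are
# specified generally, not as a detailed application UI session.
itemization = {
    "category": "code-review",
    "default-apps": {"text": "editor", "web": "browser"},
    "items": [
        {"kind": "files-and-locations", "path": "src/parser/"},
        {"kind": "uri", "uri": "https://bugs.example.org/1234"},
    ],
    "arrangement": {"windows": 1, "splits": ["editor", "browser"]},
    "geometry-hints": {"editor": "left-2/3"},
}

# A receiving frontend would walk the items and instantiate each one
# with the receiver's own default application for its kind.
kinds = [item["kind"] for item in itemization["items"]]
```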

The ideal case would be multi-WS collaboration so easy that you don't even consider collaborating in someone else's world.  You could, for example, always carry a drone-on-a-stick with you.  The advantages are clear: if you collaborate in your own e-world you automatically ensure a working data exchange and get the history of your activities.  You can immediately install small customizations in your own workbench, or share DS-modules to synchronize settings for your collaboration-group.

But it is not only useful for collaboration in one physical location.  Imagine a remote collaboration where someone shares the desktop to illustrate a problem, and you would be able to ask for the parts list and arrangement of what you see on that desktop.  The creation dialog could assist by automatically excluding resources outside of the bounds of the workbench and the receiver could select which items to instantiate with which applications.  Items for a programming task could be files-and-locations, a bug-number, and a selection of manual pages.

In the bigger picture, there is a variety of ways to collaborate by exchanging one or more of: data, control, and itemization, as well as relying on tools like VCS, VNC, WS-drones, and WS-connections.  No new things here and already useful in other contexts.

In addition, if you consider the availability of a video channel, remote and local collaboration become almost one.  Or a gradient.  Meaning you have the same options available and you can switch between local/remote collaboration without much virtual hassle.  It just depends on whether you want to, or can, be active on one or on multiple screens.  It would be possible to let only one remote collaborator operate, or to give multiple local collaborators their own screens.