
Saturday, August 15, 2015

Pwd Rules

This happens to me all the time: I have to create a password for a site that I don't use every day, but it's an important site, like a bank or insurance site. The site usually has some weird set of rules that I obeyed when I created the account, but have since forgotten.

So when I come back after a while and have to get in, I have some inkling of my password, but cannot get past the auth page because of some arcane rule that is not obvious to me at that point. Of course, I go through the reset-password procedure and THEN they list out all the requirements for the new password.

If I could see the rules one page ahead, though, I'd probably be able to remember my original one!

So here's the idea: a website called PasswordRules that stores the password rules by site. You can query by site to get the rules for that site and use them to log in. Simple!

Better yet: a browser plugin that pops the rules up when you visit the site.
More: an API for contributors and website owners to publish the rules programmatically.

Design:
  • Create a website that has:
    • add new site and its rules
    • edit site rules
    • search for rules by site
  • Create an API that allows:
    • Search for rules by site
    • [optional] add/edit site rules
  • Create a plugin for Firefox, Chrome, etc. that:
    • Uses the API to search for the current site and load up its rules in a popup (sketched below)
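
As a rough sketch of the plugin piece: a content script could ask a hypothetical rules endpoint for the current hostname and overlay whatever comes back on the login page. The endpoint URL, query parameter, response shape and field names below are assumptions, not a real API.

// Minimal sketch of the plugin's lookup step. The PasswordRules endpoint
// and its response fields are hypothetical.
async function fetchRulesForCurrentSite() {
  const host = window.location.hostname;
  const resp = await fetch(
    "https://passwordrules.example/api/rules?site=" + encodeURIComponent(host)
  );
  if (!resp.ok) return null; // no rules published for this site yet
  return resp.json(); // e.g. { minLength: 8, maxLength: 16, classes: ["upper", "digit"] }
}

// Show the rules in a simple overlay near the login form.
fetchRulesForCurrentSite().then((rules) => {
  if (!rules) return;
  const box = document.createElement("div");
  box.style.cssText =
    "position:fixed;top:8px;right:8px;padding:8px;background:#ffe;border:1px solid #cc0;z-index:99999";
  box.textContent =
    "Password rules for " + window.location.hostname + ": " + JSON.stringify(rules);
  document.body.appendChild(box);
});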

Thursday, August 14, 2014

Idea: A screenreader that's a market place, not a queue

Had this idea in a meeting at work today where ADA compliance was being discussed - specifically, how making a page ADA compliant requires reordering the markup, sometimes entirely.

What if the screen reader didn't read sequentially - like a queue of people who talk to a teller - and instead read everything that's on a page at the same time - almost like a marketplace where everyone's talking at the same time?

I'm not blind, but I have to imagine that this is how blind people perceive the real world anyway - as a cacophony of sounds from which they have to filter out the noise. This is not that different from fully able people sensing the world and filtering for what's important.

If we had such a screen reader, the document markup could be the cue for how "loud" each element would be; child elements could be "muted" in favor of the parent, and so forth.
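
As a rough sketch of that loudness mapping, assuming volume is derived purely from nesting depth. Note that browsers queue speech utterances, so this only demonstrates the cue; true everything-at-once playback would need something like the Web Audio API.

// Derive a "loudness" for an element from how deeply it is nested:
// parents are louder, children progressively quieter.
function loudnessFor(el) {
  let depth = 0;
  for (let node = el; node.parentElement; node = node.parentElement) depth++;
  return Math.max(0.1, 1 - depth * 0.1);
}

// Speak the readable elements with their mapped volume (0.0 to 1.0).
document.querySelectorAll("h1, h2, p, li").forEach((el) => {
  const utterance = new SpeechSynthesisUtterance(el.textContent.trim());
  utterance.volume = loudnessFor(el);
  speechSynthesis.speak(utterance);
});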

Todo: check if screen readers already do this.

Saturday, January 04, 2014

Idea: Stable visualization of a codebase

I have been thinking lately about visualizations of a codebase - spurred on by recently rediscovering Software Cartography and its successor - Codemap. Coincidentally, I also wanted to create my personal website as a "visual representation of the web of connections that it is", which essentially boiled down to a stable visualization.

When I looked at the tools currently available to do this, they seem overly complex. The closest was Gource, but it focuses on the people who worked on the code and doesn't generate a stable visualization.

So here's my idea:

  • The visualization will be created from the commit history of the codebase.
  • Once created, the visualization is not a snapshot, but can be enhanced over time to show changes. So the output format should contain the history of changes.
  • The visualization is essentially a Treemap-ish diagram with time along the X-axis and size along the Y.
  • Each object (file or directory) is drawn as it comes to life in the commit log and is represented as a rectangle.
    • Position: The first object that is created gets the position x=0 within its parent, the second gets x=1 and so forth. Once assigned, these positions are permanent even after the object is moved or removed.
    • Dimensions: The width scheme is the same for all: files have a width of 1 unit and directories have a width equal to the sum of the widths of their contents. The height is equal to the size of the file.
  • When an object is changed, its old size shows up as a faded outline within the newly sized rectangle - somewhat akin to the age rings of trees. Size reductions may show age rings outside the current rectangle.
  • When an object is moved, its old position shows a faded outline and objects after it do not move to take up the position.
  • Similarly when an object is deleted, its old position shows a faded outline.
  • Keeping the visualization contained: This is where the Treemap concepts are helpful. The complete visualization's size will be calculated inside-out: the size of the deepest directory will control the % contribution of its parent and therefore transitively its grandparent, and so forth. This way, the visualization can be contained in a finite space. At its smallest size, each "rectangle" will be reduced to a line: the position still remains as described above, the width is reduced to 1 pixel and the length is still the size of the file. No rings are possible at this level of compaction.
  • Controls: The visualization will have:
    • Play: A way to see the evolution of the codebase a la Gource
    • Zoom in and out
    • Time Filter: A way to filter out older rectangles. This will essentially show the current state of the codebase, but since all positions are fixed, it will give an idea of how far the current state is from the original.
    • Object Highlight: This will highlight a particular file or directory to "show where it is in the map"
    • Object Trace: This will highlight the path of the object throughout its evolution in the codebase.
    • Commit Highlight: Highlight all files in a commit
The advantage I see with such a visualization is that it combines a stable spatial representation of the code with its evolution over time. Using a treemap representation keeps it bounded, so the view could be injected into current developer environments without taking up too much screen space.

Implementation notes:
  • A quick way to implement this might be using html divs.
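
To make that concrete, a minimal sketch that renders a toy in-memory model as absolutely positioned divs: fixed x positions, height proportional to size, and faded outlines for deleted objects. The model shape and the scale factors are assumptions, not part of the design above.

// Toy model: each object keeps the x position it was assigned at creation,
// even after deletion; height stands in for file size.
const objects = [
  { path: "src/a.js", x: 0, height: 120 },
  { path: "src/b.js", x: 1, height: 60, deleted: true },
  { path: "src/c.js", x: 2, height: 200 },
];

const UNIT = 20;   // width of one file column, in pixels
const SCALE = 0.5; // pixels per unit of file size

const map = document.createElement("div");
map.style.cssText = "position:relative;width:600px;height:300px;border:1px solid #999";
document.body.appendChild(map);

for (const obj of objects) {
  const rect = document.createElement("div");
  rect.title = obj.path;
  rect.style.cssText =
    "position:absolute;bottom:0;" +
    "left:" + obj.x * UNIT + "px;" +
    "width:" + (UNIT - 2) + "px;" +
    "height:" + obj.height * SCALE + "px;" +
    // deleted objects keep their position but show as a faded outline
    (obj.deleted
      ? "border:1px dashed #aaa;opacity:0.5"
      : "border:1px solid #333;background:#8cf");
  map.appendChild(rect);
}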

Sunday, March 10, 2013

Code as you think

I was in the middle of deciding how to proceed with my current side project when I realized that I'd not yet created an architectural description for it. Since I already have an architecture description tool that I wrote, I have no excuse for not writing one out. So I stopped to think about why I didn't create an architecture definition first. Ans: Because it would stop me from doing what I was doing right then, which was to decide:

  • how to separate out the current spike of the language from its sample code and actions.
  • how this will be deployed in and of itself, ie how to dogfood it?
  • how to move from the spike folder to the actual folder
  • how to move from bitbucket (where I'm keeping all this stuff till it's public-worthy) to github or a public bitbucket repo (where I expect this to be shared with the world)
  • ...all in the middle of adding commit messages for changes
  • while juggling code between the PC and the mac
  • ... and so on.

As I'm writing this, more thoughts are streaming through, and they're all jumbled together: code ideas, architecture ideas, documentation, behind-the-scenes reasons, journal entries - all in one single stream of consciousness. Stopping now to create an architecture definition would not just slow me down, it would take me down a totally different path. And I don't want to go down that path because the IDEAS ARE JUST FLOWING!

Another reason I didn't want to context switch into "architecture definition mode" was the high cost of that switch: create a new document, decide where to save it, type in the actual content, optionally see how it looks once the markdown becomes html and so forth. IMO, this is also why documentation is out-of-date with code, especially expository documentation like architecture and design docs. The comment-above-code kind of documentation may still be in touch with the reality of the code, but higher level documentation like READMEs and user guides quickly become disconnected from the code they're talking about.

That's when it hit me: What we need are tools and languages that understand the stream-of-consciousness way of building software: code snippets flying into the system from the human mind along with documentation and random thoughts and todos and reorganizations of such thoughts - all of which the system will record in sequence and generate meaningful views out of.

Imagine an editor that:
  • Allows creation of content without the need to pick a name or a location for it.
  • Knows enough about the content to provide appropriate editing affordances - syntax highlighting, preview, whatever.
  • ... and autosaves it for you - no separate save step required
  • ... then allows tagging it as being "of a particular type"... whatever that means to you
  • ... then allows you to seamlessly retag it differently
  • ... including being able to arrange it in hierarchies (if that made sense) or graphs (ie nodes and links - if that made better sense)
  • Presents all existing content and its associated "tag graph"
  • Allows linking content with each other
  • ... or even parts of content with each other.
  • Shows the links visibly
  • Allows tagging parts of a document as being of a different tag, but still housed in the source document. Content tagged thus will be displayed under both tags, but the second display will be a reference to the original.
  • Tracks all changes to the content graph in a stream-of-consciousness journal that's visible and accessible at all times so that you can see what you did before in the order that you did it.
  • Allows you to add your own manual entries to the journal to track your thoughts at that point in time.
Such an editor could be used to make the context switches mentioned above as seamless as possible. The only bit of automation it promises is the decidedly "big brother"-like journal, but in my opinion that's something the computer should do for us - track what we're doing automatically so we can look back and "see the trace of our thoughts from before in our actions". Features like seamless switching allow easy creation of design, code and comments in a back-n-forth style, for example; while the ability to link content allows todos to be linked to the point in code where a change is required, while still being maintained as a separate list of todos for easy review.

To allow easy adoption, such an editor should add the following:
  • Treat an existing directory structure as the basis for its tags: folders become tags, folder structures become hierarchical tags, files become fully described hierarchical tags. If starting from a blank folder, however, allow for the initial tagged content to remain as files until the user discovers the "natural order of things" for himself.
  • Seamlessly convert files to folders when the user renames them.
  • Save the changes to the underlying folder as soon as possible so that there's no "last minute unsaved content change"
  • Allow for some scripting/automation triggered by content changes.
UI Design Notes:
  • Start the editor from a particular dir, which we'll call <dir> in these notes. The startup sequence optionally allows choosing a default language to use; otherwise the default is plain text.
  • The editor starts with two windows: a default one titled "<dir>" and the journal titled "stream". The former is editable, the latter is read-only. 
  • The default window is "in" the default language (if chosen), ie is syntax highlighted for that language. Typing text into it and hitting enter causes text to show up like any editor, but it also causes the same text to show up in the stream window with a date/time stamp. Internally, the text in the stream window is a reference to the line in the original window, not a copy of its contents. In the background, all the text changes are saved back to <dir>.
  • Typing F2 allows you to rename the window's title, which is also its tag. The editor allows creation of hierarchical tags like "tag1/tag2/tag3". It also allows setting the language of the content by naming the tag with the format "tag.type" - similar to how files are named today.
  • Typing an esc puts you into "enter a tag" mode. A small edit area (or vim/emacs-style command area) shows up which allows you to type in the name of a tag such as "todo" or "comment" or whatever. Hit enter when done and two things happen: the edit area disappears, and a new window titled with the tag just entered shows up and focus moves to that window. Type text and hit enter to enter content in this mode, while it also appears in the stream window like before. If the tag is already present, focus shifts to the existing window for that tag and text input proceeds from there.
  • Selecting text and then pressing esc to select a tag will leave the text in the original window, but add that text to the window corresponding to the chosen tag (via reference again), as well as denote the tagging event in the stream window. Pressing ctrl-esc instead of esc will cause the text to be moved to the chosen tag.
  • Ctrl-tab will cycle through all open windows and allow you to make any window the main one. The same esc and enter behavior holds for all windows.
  • Everything is save-as-you-type. No ctrl-s required. The tag is the name of the content.
More features:
  • Windows are tiled, not free. This allows pinning of windows to specific locations so as to create familiar screen layouts.
  • A third kind of window is a modal window, which can be in input or output mode. This will allow creation of repls: issue a command when it's in input mode and view results when it's in output mode, which is read-only. The "stream" window can also be recast as a modal window, allowing journal entries in input mode.

Ok, that was the easy part. What about an s-o-c language?

Such a language would:

  • Allow for programs to be expressed one chunk at a time. The language runtime could be invoked at any time during the expression, at which point the runtime would evaluate the current set of chunks and determine if it can do anything with them. If the runtime is a compiler, it would decide if object code can be generated or not; and if it's an interpreter, it would interpret the chunks available.
  • Not have comments. Instead it would allow a more generic interpretation of commented content as "chunks from another language embedded within itself", which could be "understood" or "translated to itself" given an appropriate cross-interpreter/compiler.
  • Allow chunks to be linked to each other and handle link lifecycle events. Eg: chunk A is related to chunk B could imply that if A changes, B should also change; thus flag such changes on A as requiring changes in B as well. Or, if chunk A is deleted, chunk B should be deleted as well because it doesn't make sense to keep it around anymore. Or, if chunk A is created, a corresponding chunk B MUST be created.
More thoughts on the language TBD.
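
In the meantime, a minimal sketch of the chunk-link lifecycle idea in plain JS. The chunk ids, link kinds and rules ("flag" on change, "cascade" on delete) are assumptions made up for illustration:

// Chunks plus links that carry lifecycle rules: "flag" marks the target as
// needing attention when the source changes; "cascade" deletes it along with
// the source. Everything here is illustrative, not a language spec.
const chunks = new Map([
  ["A", { text: "function area(r) { return Math.PI * r * r; }", stale: false }],
  ["B", { text: "doc: area(r) computes the area of a circle", stale: false }],
]);
const links = [{ from: "A", to: "B", onChange: "flag", onDelete: "cascade" }];

function changeChunk(id, newText) {
  chunks.get(id).text = newText;
  for (const l of links) {
    if (l.from === id && l.onChange === "flag") chunks.get(l.to).stale = true;
  }
}

function deleteChunk(id) {
  chunks.delete(id);
  for (const l of links) {
    if (l.from === id && l.onDelete === "cascade") chunks.delete(l.to);
  }
}

changeChunk("A", "function area(r) { return Math.PI * r ** 2; }");
console.log(chunks.get("B").stale); // true - B needs a corresponding change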

Implementation Notes
Editor:
  • Use node-webkit, codemirror and sharejs to allow evented entry of text
  • Make a repl using a bot that's sitting at the other end of a sharejs connection.
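
A minimal sketch of the default-window/stream pairing with the CodeMirror (v5) API: hitting enter in the main window appends a timestamped entry to a read-only stream window. The container ids are assumptions, and the real design would store references to lines rather than copies.

// Two editors: an editable main window and a read-only stream/journal window.
// Assumes <div id="main"> and <div id="stream"> exist on the page.
const main = CodeMirror(document.getElementById("main"), {
  mode: "text/plain",
  lineNumbers: true,
});
const stream = CodeMirror(document.getElementById("stream"), { readOnly: true });

main.on("keydown", (cm, evt) => {
  if (evt.key !== "Enter") return;
  const line = cm.getLine(cm.getCursor().line).trim();
  if (!line) return;
  // Append a timestamped journal entry for the line just completed.
  stream.replaceRange(
    "[" + new Date().toISOString() + "] " + line + "\n",
    { line: stream.lastLine() + 1, ch: 0 }
  );
});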
WIP notes

Started work on a prototype using node-webkit. A basic version of the UI Design has been implemented, and the updated version of this post reflects some experiences from this implementation.

Thursday, February 14, 2013

JMeter as a webapp

I've long felt that it might be a good idea to create a web-based controller for JMeter. JMeter has been and still is a good tool for performance testing. Its Swing-based UI, however, has always been a bit of an eyesore and IIRC a memory hog. The one interesting behavior it has, however, is "save on tab", ie any data you type is automatically saved when you tab out of the control. Of course, the save happens to memory, so you still have to Ctrl-s to actually save. And then there's the "feature" of saving any chosen node as a new file. Why, I'll never know.

Anyhoo, my idea is to create a webapp version of the JMeter UI. The possibilities are interesting:

  • Even a straight copy of the existing design should right away make a JMeter instance accessible via the intra/internet. Add features to actually kick off performance runs and you have a saucelabs equivalent for performance.
  • If instead we tweaked the UX a bit and actually modeled the performance testing workflow (which, btw, the JMeter XML supports inherently), the result could be vastly better. Create the test run workbench separate from actual runs, show reports as a separate view instead of as attached nodes, and so forth.

Sunday, February 10, 2013

Explicit Language constructs for architectural concepts

You know how we have modules, layers and environments for code but none of these concepts are actually IN the language?

Modules are typically self-contained chunks of code that expose an interface to some useful logic. It is not important how it does what it does; but it IS important that we be able to find the module and call it. In most OO languages, we use functions or object/class methods to hold such logic, but then have to actually build the concept of a module using user-visible mechanisms.

Layers are another conceptual tool for organizing code. This typically reduces to packages or namespaces, but neither of these concepts actually enforces a hierarchy or "can/can't call" rules. Similarly, it would make great sense for dependencies themselves to be layered this way, but no language that supports imports also supports defining the layer for those dependencies.
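
Since no mainstream language gives us this, the closest we get today is an external check. A minimal sketch of what that might look like: declare the layers and a module-to-layer map, then flag any import that calls "upwards". The layer names, module map and import list are invented for illustration.

// Layers are ordered: a module may only import from its own or lower layers.
const layers = ["ui", "service", "data"];

const moduleLayer = {
  "login-page": "ui",
  "auth-service": "service",
  "user-repo": "data",
};

// Imports observed in the codebase (e.g. scraped from require/import statements).
const imports = [
  { from: "login-page", to: "auth-service" },
  { from: "user-repo", to: "login-page" }, // violation: data calling up into ui
];

for (const { from, to } of imports) {
  const fromIdx = layers.indexOf(moduleLayer[from]);
  const toIdx = layers.indexOf(moduleLayer[to]);
  if (toIdx < fromIdx) {
    console.error("Layering violation: " + from + " (" + moduleLayer[from] +
      ") imports " + to + " (" + moduleLayer[to] + ")");
  }
}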

Environments - by which I mean the container or context in which code runs - are similarly underspecified. Most languages define a main entry point and call it done. Why not have a generic way of stating an entry point that includes a declaration of the environment (or even environments - plural)? The declaration of environment allows the actual standup of said environment and the trigger of the main code to be separated from the code itself; while the ability to declare that "this code runs in both gui and cli environments", for example, makes such purity explicit in the code and allows determining statically whether it is true.

Well, this idea is to explicitly add those concepts to the language syntax itself.

The idea is that the implementation explicitly states the architecture assumed. Two advantages:

  • The intended architecture is explicitly available to future developers
  • Tools can be brought to bear on code directly, maybe even in the language
Of course, this does mean that the language knows only about specific ways of organizing code. Within that limitation, however, things are much clearer, IMO.

Monday, January 14, 2013

"Whose Turn?" App

This idea is from a recent outing of 3 couples with kids under six:

A Phone app that tells whose turn it is to change diaper/ deal with the lil monsters / feed them / whatever.

You choose the activity, it keeps score and tells you whose turn it is.
Once you're done you say you did it, and it updates the score.

Bonus points for doing things out of turn is a configurable setting :)

Tuesday, December 04, 2012

An agent-based parser for a structural IDE

I had been thinking about the difference between text editing and structured editing and it dawned upon me that the latter will never win simply because it is not "natural" to edit structurally and that such editing is not forgiving of "mistakes". Sadly, text editing is good by default; and structured editing is not so by design.

We have a grammar, so we use it. But do grammars have to be policemen all the time? After all, we humans created grammar; and I have to think that we created it by convention to make sense of the otherwise chaotic barrage of communicative sounds we invented. So if natural language grammar is intended to be "guidelines and lighthouses to the islands of information in the oceans of noise", why shouldn't computer language grammar be the same?

The problem, of course, is that such a grammar would be non-deterministic. How would you specify a terminal symbol with any kind of certainty when any possible other symbol could intervene and has to be considered noise in a very context-sensitive way?

I wonder if there's another way; and the rest of this post will record my thoughts to date on this other way.

Imagine each symbol of the language's alphabet as a cell waiting for user input. Imagine each non-terminal in the language's grammar also as a similar cell, only it waits for symbol cells to activate. The moment a symbol appears in the input (this could be the read of the next symbol from a file or the next key pressed by the user) the symbol cell grabs it and announces that it's now "active". This triggers "interested" non-terminals to claim that symbol as part of themselves, although the ownership is shared - until all symbols of a particular non-terminal are satisfied, at which point it claims all its symbols to itself and announces that it is active. This triggers the next level of non-terminals to wake up, and so forth.

This sounds pretty close to how Agent-based systems work, hence the term in the title. If I were Microsoft, I'd probably call it ActiveParsingTM.

The key advantage with such parsing would be that ambiguous input could be handled by the system. Also, input that doesn't match any production rule in the grammar would still be allowed into the system; which means that erroneous user input is tolerated, not berated.

In addition, such a system would be incremental from the ground up: although it is backed by a grammar, input is not constrained to "start at the top and end at the bottom of the grammar". It is also inherently event-driven (see design notes below) and therefore should be able to react to editing actions better. This ties in well with "natural" structural editing.

Finally, such a system would allow for hybrid languages that embed other languages within themselves, eg HTML+CSS+JS.

I cannot imagine that this hasn't been thought of before, as always. My Goog-fu not being up to snuff, however, I was not able to ferret out anything concrete. Of course, I did find a lot of references in the NLP literature to agent-based systems and automata-based parsers and suchlike, but they usually devolved into stats or deep math to prove the value, which I couldn't comprehend.

Design Notes

  • The system works on heartbeats called ticks. Everything happens once per tick.
  • There are cells. Cells wait for external input. All cells are guaranteed to see a new datum input into the system within the same tick. Cells activate (or not) during that same tick for a particular datum. A cell's activation recognizes a single symbol or character.
  • There are cells that wait on other cells for input. These cells activate when all the cells they need are activated. Their activation recognizes a token or non-terminal.
  • The input could be control input (such as the delete key) as well. This would result in previously activated cells becoming deactivated, changing the state of the system. The impact would be felt in the system over the next few ticks, but also means that it would be local to only those parts of the grammar that are indeed affected by the change, not universal as is typical in standard parsing systems.
  • At all levels, activation triggers an event that can be handled outside the system. In particular, recognition of a non-terminal could be handled as highlighting it as a known keyword or identifier, etc. This is how syntax highlighting would happen
  • At a sufficiently high level, activation of a non-terminal means something larger than syntax highlighting, eg, validation, interpretation/compilation or execution can happen.
  • Finally, all cells can be supplied with a prerequisite activation signal which would HAVE to be "on" for them to become on. This would allow controlling the "viewport" of visible cells easily enough to constrain the calculation to that part of the text that is visible.
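
A minimal sketch of that cell/tick model with a single toy production: every cell sees each datum in the same tick, and a non-terminal cell activates once all of its parts have. The names and the one production used here are made up for illustration.

// Symbol cells grab matching input; non-terminal cells wait on other cells.
class SymbolCell {
  constructor(symbol) { this.symbol = symbol; this.active = false; }
  tick(datum) { if (datum === this.symbol) this.active = true; }
}

class NonTerminalCell {
  constructor(name, parts, onActivate) {
    this.name = name; this.parts = parts; this.onActivate = onActivate;
    this.active = false;
  }
  tick() {
    if (!this.active && this.parts.every((p) => p.active)) {
      this.active = true;
      this.onActivate(this.name); // e.g. syntax-highlight the recognized token
    }
  }
}

// A toy production: LET := 'l' 'e' 't'
const symbols = [new SymbolCell("l"), new SymbolCell("e"), new SymbolCell("t")];
const letKeyword = new NonTerminalCell("LET", symbols,
  (name) => console.log("recognized " + name));

// Feed input one datum per tick; every cell sees each datum in the same tick.
for (const ch of "let") {
  symbols.forEach((c) => c.tick(ch));
  letKeyword.tick();
}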
Update
Found it! An undated paper about Object Oriented Parallel Parsing for Context-free Grammars. Although I couldn't find a date on it, references to Symbolic Lisp machines date it to at least the 90s if not the 80s. Google search terms: "parallel parsing computer language"

Thursday, November 29, 2012

Idea: Email + Document Server in a box

I don't yet have a good name for this concept, but here's the idea anyway:

You know how a lot of good ideas are better put in an email than in a separate document, because people will not open that attachment?

What if you could actually then easily extract out such content and make it available as a document? Further, what if the original recipients of your email continue to see it as an email, but it has a separate life as a document?

This is what I mean by the Email server being a document server as well. Each email could contain multiple such documents; or each email could itself be a document. The inline vs attachment line is blurred, but at the same time, individual bits within emails become addressable entities outside of the email thread.

Right now, we save such emails, re-format them as documents and stick them in a wiki or sharepoint. What if the Email server did all this for us?

Wednesday, October 03, 2012

JS++

Sometimes you don't think of the obviousness of an idea until someone hits you on the head with it.

Typescript was released today and it finally dawned on me (despite the very visible existence of Coffeescript in that exact same mold for ages) that creating a pre-processor for Javascript that cleaned up all the messy bits for you was a viable option.

After all, that's how C++ came to be, isn't it?

But if I were to do it, I'd do it a bit differently:

  • I'd not have the TS-style classes, interfaces and the like; instead I'd just have a simple object creation syntax that's better than the default JS one or the JSON-style one
  • Modules would map to CommonJS or AMD modules, but they would behave (from a deployment perspective) more like Fantom pods than the module-within-module craziness that TS seems to expound. It sucks in Java and I'm convinced the Fantom model is better simply because "One conceptual container == one deployment container" is such a reduction in cognitive overload.
  • Embrace hoisting for "top-down style" writing of code, but introduce true scopes to retain sanity.
  • Some implementation-neutral way of embedding the ultimate javascript into its host environment through the preprocessor. This is something tools like yeoman and bower miss, IMO.

Those are some of my main themes, I'd say; there's more that another reading of the Crockford bible should produce.

Design notes: 
  1. Since this should be a simple "if you see this jspp code, spit out that js code" type thing, I'm thinking something as simple as StringTemplate might suffice, although it might be too early to speculate that far. If it did though, I'd build the whole preprocessor as a series of calls to a function called becomes(srcstr,deststr)
  2. The implementation will have to support output of multiple files from the processing of a single input file.
  3. If this turns out to be small enough, we could finally have a JS language that runs transparently on Rhino. CS currently faces the issue that its main code is larger than the size that Rhino can handle.
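A minimal sketch of the becomes(srcstr, deststr) idea from note 1, treating each rule as a regex rewrite over the input. The two example rules are invented stand-ins, not actual JS++ syntax.

// Each becomes() call registers one rewrite rule; preprocess() applies them all.
const rules = [];

function becomes(srcPattern, destTemplate) {
  rules.push({ srcPattern, destTemplate });
}

function preprocess(source) {
  return rules.reduce(
    (text, { srcPattern, destTemplate }) => text.replace(srcPattern, destTemplate),
    source
  );
}

// Example rules: a simpler object-creation syntax and a module declaration.
becomes(/object (\w+)\s*\{/g, "var $1 = {");
becomes(/module (\w+)/g, 'var $1 = require("$1")');

console.log(preprocess("object point { x: 1, y: 2 }"));
// -> var point = { x: 1, y: 2 }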
Postscript: I'm tentatively calling this idea JS++ simply because that's how C++ came to be; but I can see how this might not make good marketing sense. If so, here are some alternative names:
  • Better Javascript : .bs :)
  • Easier JS: .ejs

Wednesday, September 19, 2012

Idea: Use a map framework to depict code

Today's XKCD comic and its interpretation as a zoomable view using Leaflet had me thinking of the possibilities this presents:

Software cartography already demonstrated how code could be converted into a map. It even has the interesting property that it attempts to map the mental model of the code instead of its specific implementation - which IMO is way better than something like Code City simply because the city (or country) looks the same even if a few buildings disappear - if you know what I mean.

The only missing piece is scale - how do you scale this up to larger and larger codebases? Well, using a map engine is one way, IMO.

The problem of scale has already been solved there, as has that of display form factor: most map frameworks are already mobile-ready. The UI metaphors are familiar to most people, too.

The only possible thing that detracts from my grandiose view of an n-dimensional version of CodeBubbles to depict the true complexity of code is that map engines are decidedly two-dimensional. But even that is a weak argument - layers provide sufficient degrees of freedom to annotate the display appropriately.
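
For a feel of the mechanics, a minimal sketch that puts a pre-rendered code map onto Leaflet's non-geographic coordinate system; the image URL, bounds and marker are placeholders.

// Use Leaflet's simple (non-geographic) CRS so coordinates are just pixels.
const map = L.map("codemap", { crs: L.CRS.Simple, minZoom: -2 });
const bounds = [[0, 0], [1000, 1000]]; // "world" size of the rendered code map
L.imageOverlay("codebase-map.png", bounds).addTo(map);
map.fitBounds(bounds);

// Layers give extra degrees of freedom: e.g. a marker layer for hotspots.
L.marker([500, 500]).addTo(map).bindPopup("core module");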

Thursday, September 13, 2012

The "Save Download to Category" Browser Add-on

I download things all the time, and they all go to the same place - my downloads folder. It's nice to have a single place to look at all the things I downloaded, but most of the time I'm moving the download someplace else once it's done. This two-step process is annoying.

So the idea is this:
Have a browser add-on that allows you to pick where to put the downloaded file(s) with one click. Preferably with category tags that map to actual folders.

Todo: Check if something already exists. It most probably does.
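
If nothing does, a minimal sketch of what it could look like as a Chrome extension. Note that onDeterminingFilename can only choose a path inside the default downloads folder, so categories become subfolders there; the hostname-to-category table is an assumption.

// Requires the "downloads" permission in the extension manifest.
// Map hostnames to category folders inside the downloads directory.
const categories = {
  "github.com": "code",
  "arxiv.org": "papers",
};

chrome.downloads.onDeterminingFilename.addListener((item, suggest) => {
  const host = new URL(item.url).hostname;
  const folder = categories[host] || "misc";
  suggest({ filename: folder + "/" + item.filename });
});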

Sunday, June 10, 2012

What if there IS no source code?

...yeah, kinda linkbait-y heading, I know.

What I mean is: What if there is no single source of the code?

Let me explain.

Typically we have source code. It's written by someone, stuck in a source control system somewhere, changed by others and so on.

The projectional editing school of thought modifies that picture somewhat by suggesting that we could have different views of the same code - a functional/domain-specific one, a folded one, a UML(ish) one, a running trace one and so forth. The relationship between the source and the multiple views, however, remains decidedly one-to-many.

What if instead of this master-slave relation, the different views themselves were the source? That is, the "whole picture" is distributed across the views - like a peer network?

Assuming the views are consistent with each other, modifying the source in one view should retain the true intent. But is that even possible? Views are, by definition, projections of the code, meaning some parts are included in the view and some aren't. So how would we maintain consistency across multiple views?

Two paths lead from here:
One: We cannot. This is why we need the "one true source" that's the parent of all views.
Two: Maybe we don't need consistency all the time. Maybe we can do with the "eventual consistency" that big data/nosql guys are raving about?

#2 seems like an interesting rabbit hole to explore :)

Thursday, June 07, 2012

Naked Objects for Mobile

Why not, I ask?

The Naked Objects Pattern has been an idea that I've always found interesting and useful for quick-n-dirty apps.

The Apache Isis way of NO seems a little bit overwrought (as most Apache projects are), but I've been following JMatter for quite a while now and have been impressed despite it being a Swing-only UI.

For the simple one-user app, the NO pattern seems sufficient: the thing gets built quickly, there's lowered expectation of customized UI from the user and probably even platform portability.

Monday, May 07, 2012

I am Jack's total lack of unlayered design

A nebulous idea grows within me, partly from GUTSE's insight that all programming is mere translation from one language to another: all von Neumann machines are merely SSI (Sequence, Selection, Iteration) executors that essentially facilitate these translations. Why not enshrine these facts directly, then? The design of Jack, therefore, stands refined to:
  • A simple SSI executor at the base with no idea of what data is beyond the generic map that it reads in as input.
  • A data layer (built using the SSI + its environment) that actually does know about the bits of data stored in the map. This layer essentially provides the box/unbox, serialize/deserialize, parse/print dual functions as required.
  • The data transformation layer (built using the layers below) that enables and reasons about the transformations declaratively so that they can be composed, collapsed etc.
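
A minimal sketch of the base layer alone: an executor that only understands sequence, selection and iteration over an opaque data map. The step shapes used here are assumptions.

// The executor knows nothing about the data beyond the generic map it is given.
function run(steps, data) {
  for (const step of steps) {
    if (step.kind === "seq") {
      step.do(data);                                       // Sequence
    } else if (step.kind === "sel") {
      run(step.test(data) ? step.then : step.else, data);  // Selection
    } else if (step.kind === "iter") {
      while (step.test(data)) run(step.body, data);        // Iteration
    }
  }
  return data;
}

// Example: sum 1..n without the executor knowing what "sum" or "n" mean.
const program = [
  { kind: "seq", do: (d) => { d.i = 1; d.sum = 0; } },
  { kind: "iter", test: (d) => d.i <= d.n, body: [
    { kind: "seq", do: (d) => { d.sum += d.i; d.i += 1; } },
  ]},
];
console.log(run(program, { n: 5 }).sum); // 15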

Wednesday, March 14, 2012

Early thoughts on Jack

I found this writeup from 2009 in one of my umpteen storage media and thought it was evocative enough to publish. My later ramblings on the topic of Jack have been attempts to concretize the ideas rather than gaze starry-eyed at the wonder that is that ephemeral topic of a versionable language. This one is more of the latter kind than the others. There are two parts to the writeup: an exposition on top and notes at the bottom. The notes are actually the outline of the exposition, which is essentially unfinished. Except for some cosmetic formatting changes, I've reproduced the piece as is:

Think of an aspect. An aspect usually takes a chunk of code and, in the most general case (called the around advice), wraps an envelope around that chunk. This envelope acts as a guardian to any entry into the chunk and is not only capable of altering the expected behavior of the chunk by altering its inputs, but is also capable of deciding if the chunk should execute at all.

Think of a monad. It effectively does the same, except it does it to separate non-functional bits from functional ones.

Think of an ESB. It too does the same; except that it does it at a much higher level of granularity - that of a service. Taking the generally understood definition that services are suitably large components that house some specific business functionality, an ESB orchestrates the sequence of operations between these components to achieve the effect of an application.

Now think of how we modify code. We do the same thing - alter the expected behavior, decide if the code should execute at all, (re)orchestrate the sequence of operations to achieve the effect we're expecting.

We are the aspect, the monad, the ESB. 

However there are major differences between us and these constructs that act to the disadvantage of humans while maintaining code:
  • each of these constructs has the feature that it describes the change it effects on the component being changed. The human's changes are available only as diffs in an external tool - the version control system
  • the human's change is at a textual, character level; not at a statement/method/package/module/unit-of-execution level. So the change is not perceived as a change of language elements, but as numerous characters being shuffled around
While this might seem like a tirade against text-based programming and possibly a case for structural editors, there's more to it than that. The key problem is that the unit of execution is not identifiable. Statements in modern languages are anonymous, except by the line number of the source file. If they were identifiable, we could express something like "I had to move the 5th if statement to after the assignment of var foo" instead of "cut lines 150-234, paste at line 450". The former is what we usually do when we talk about it, but there's no direct way of enshrining that in a machine-readable way.


Whats the use of such a feature you ask? Well, imagine a language that allowed us to identify the statements, and then express addition, deletion and modification of statements within itself. Something like:
insertStmt module3.class4.method1.if#5, AFTER, module3.class4.method1.letvarfoo
The fact that the statements of code are addressable allows us to refer to them in a logical manner, and the fact that the operations carried out to cause the code to change are operators allows us to maintain the change itself as code. Similar to insertStmt there'd be addStmt, deleteStmt and modifyStmt operators; and obviously we can extend the concept of a function to these operators too, so that the complete conversion from one version to another is expressed as a single operation - a changeset in code, if you will. Producing the next version of your app is no longer a snapshot activity - it can become incremental. Further, multiple changesets can be "compiled" into a fix pack of changes to produce any version at will. And all changes are expressed as logical changes, not textual deltas.

More importantly, think of what the language (or its units of execution - the statements) would have to support for this to work. They would have to become self-contained modules. Self-contained micro services, if you will, which can then be "orchestrated" by moving them to the location in the code which will cause whichever version we desire to be effected via these transformation functions. Code therefore becomes easier to change.

Now let's take it to the next level, and define these operators at all levels of abstraction/modularization that the language supports. So we'd have addMethod, addMethodArg, addModule, etc.
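
A minimal sketch of addressable statements plus an insertStmt operator, mirroring the example above; the addresses and statement records are invented for illustration.

// Statements are addressable records; program order is itself data.
const program = new Map([
  ["module3.class4.method1.letvarfoo", { text: "let foo = 1;" }],
  ["module3.class4.method1.if#5",      { text: "if (foo > 0) { ... }" }],
]);
const order = ["module3.class4.method1.if#5", "module3.class4.method1.letvarfoo"];

function insertStmt(stmtAddr, where, anchorAddr) {
  const i = order.indexOf(stmtAddr);
  order.splice(i, 1);                                 // remove from old position
  const j = order.indexOf(anchorAddr);
  order.splice(where === "AFTER" ? j + 1 : j, 0, stmtAddr);
}

// The changeset from the example: move the 5th if to after the foo assignment.
insertStmt("module3.class4.method1.if#5", "AFTER", "module3.class4.method1.letvarfoo");
console.log(order);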


Notes:
  • Identity
  • modularity
  • esb-style orchestration
  • not just at run time, but also statically, we can express the change occurring. which makes it amenable to machine learning.
  • automatic modularization/aggregation is the key to useful versioning.
  • forget versioning. what i'm really trying to do is to discover the steps in the computation being carried out that can be abstracted out such that an esb can act on it.
  • deriving the higher order sequence from base description
  • there are only 3 basic constructs in programming - the assignment, the goto (including the implied goto by the instruction pointer), and the conditional ie if. if is usually followed by the goto, and all instructions of the JZ, JNZ variety are combinations of the if/goto. so the real inflection point for the sequence of operations is the if. we can consider any block of code before an if as a single block with appropriate inputs, outputs and context, and similarly any block of code after an if. each if clause represents a micro service. if therefore is the micro esb. 
  • so partitioning of code can be done based on ifs. now if we take all the ifs at the same peer level - within a method, class or package/module (or even app) - and find the same conditions being checked, those paths can be collapsed, or refactored - this is similar to the ideas in subtext. find the right partitioning of code so that it can be expressed easily.

Tuesday, February 07, 2012

Two ideas to improve impress.js


  1. Add a "debug mode" that shows the center and bounds of the currently selected step. This is useful in aligning 3d elements
  2. Add incremental positioning attributes. That is, step n's position is based on step n-1's. This is really useful considering the normal flow is to move from step n-1 to n. The new attributes would be "data-dx", "data-dy", "data-dz", "data-rotate-dx" and so forth.
Implementation note:
impress.js's main logic where the position of each element is determined will have to be enhanced to use the previous element's position when these incremental attributes are used. Once they're calculated, however, the rest of the logic should work as expected.
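
Alternatively, a small pre-processing pass could resolve the incremental attributes into the absolute data-x/y/z that impress.js already reads, before init is called. The data-dx/dy/dz names are the ones proposed above; everything else is a sketch under that assumption.

// Rewrite data-dx/dy/dz on each .step into absolute data-x/y/z, accumulating
// from the previous step's resolved position.
let prev = { x: 0, y: 0, z: 0 };
document.querySelectorAll(".step").forEach((step) => {
  const d = step.dataset;
  const pos = {
    x: d.dx !== undefined ? prev.x + Number(d.dx) : Number(d.x || 0),
    y: d.dy !== undefined ? prev.y + Number(d.dy) : Number(d.y || 0),
    z: d.dz !== undefined ? prev.z + Number(d.dz) : Number(d.z || 0),
  };
  step.dataset.x = pos.x;
  step.dataset.y = pos.y;
  step.dataset.z = pos.z;
  prev = pos;
});
// impress().init() can then be called as usual on the rewritten attributes.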

Monday, January 16, 2012

Rules for life

This is pretty similar to Slots, but has more direct application, methinks.

The app will have two main actions: Jot and Find.

Jot drops you into a Slots interface. You put down a sentence and define slots in it.
That's the first half. The other half is another Slots sentence that asks the question for which the first sentence is the answer.

Eg: Sentence1: I kept [the tools] in [the bottom drawer].
      Sentence2: Where are [the tools]?

Find drops you into a Slots interface populated with Sentence2-type sentences.

Simple. But I think it can be real powerful once we add true Slots functionality and automate actions based on query results.

Yes, I know there's www.ifttt.com, but this seems much more generic and less program-mey.
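
A minimal sketch of the Jot/Find matching using the [slot] syntax from the example above; the matching rule (a shared slot value between question and query) is an assumption.

// Extract slot values like [the tools] from a sentence.
function parseSlots(sentence) {
  return (sentence.match(/\[([^\]]+)\]/g) || []).map((s) => s.slice(1, -1));
}

// Each Jot pairs an answer sentence with the question it answers.
const jots = [
  { answer: "I kept [the tools] in [the bottom drawer].",
    question: "Where are [the tools]?" },
];

// Find returns the answers whose questions share a slot with the query.
function find(query) {
  const wanted = parseSlots(query);
  return jots
    .filter((j) => parseSlots(j.question).some((s) => wanted.includes(s)))
    .map((j) => j.answer);
}

console.log(find("Where are [the tools]?"));
// -> [ "I kept [the tools] in [the bottom drawer]." ]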

Wednesday, December 14, 2011

OS.next

  • Hardware Compatibility:
    • Run on x86, AMD
  • Software Compatibility:
    • Support Win PE, Unix executables and Mac executables natively.
  • Boot loader:
    • Support booting from usb, sd card
  • OS:
    • Ability to have the whole OS on portable media (eg SD card) if so required
    • File system:
      • Tagged storage instead of directories
      • Sensible file system hierarchy that combines win/unix/mac concepts
      • FS shadowing for legacy executables so that they think they're still "at home"
      • Ability to move /home to an sd card if so required. More generically, ability to configure OS to have any "standard, should be on boot volume" directory elsewhere.
    • Memory:
      • Nothing special as of now.
    • Disk:
      • versioning of files built in. ie some kind of journaling file system.
      • Security built in - encrypt a file, directory, mount point or whole system.
    • UI:
      • Light, capability-based UI framework.
      • Support for UI paradigms such as radial menus - controls that use Fitts Law better, basically
      • "Back to the place I was before" mode for people not ready to make the plunge
      • Keyboard support for everything. EVERYTHING!
    • Software in general:
      • Solve the package manager/installer problem for good
      • Solve the registry vs 1000s of config files problem for good
      • Rolling versions of the OS, easy revert back to previous version
      • Tool to see files changed due to any install/ version upgrade built into the OS
    • Shell:
      • normalization of options and formats across at least the busybox commands
      • autocomplete for command options got from info/help files
      • oo-ize commands (ie dir.ls instead of cd dir;ls)
      • Structured editing of the command line a la zsh
Updates to this post as I think em up :)

Update #1:

Show a schematic box diagram of the computer system as it boots up and have icons (or text) depicting the status of each box in the diagram.

This is in lieu of the "screens of flying text" a la linux or the "startup progress bar that never quite ends" of windows/osx

Also missed out adding my idea about intelligent workspaces, but it should be in this list.

Sunday, December 11, 2011

Information Density and textual vs visual

I was reading the Wikipedia page on Information Theory as part of my read the wiki project, when it struck me that there's possibly an objective way of measuring the effectiveness of text vs visual programming languages using Information Theory.

The central concepts (whose math I'm admittedly unable to fathom) are those of information, information rate, entropy and SNR. One of the age-old cases for text-based programming (and therefore against non-textual programming languages) has been that it has a very high SNR and that its "information density" is high for a given amount of screen real estate.

Is that really true, though? How much "noise" does syntax add? On the other side of the spectrum, I've seen infographics that assuredly deliver more "understanding" in the given screen space than the equivalent textual description. Is it possible to design an "infographic-style" programming language that packs more power per square inch than ascii?

It would be interesting to do some analysis on this area.