Thursday, March 24, 2011

Dependency Hell: There has to be a better way

I have just given up on trying to install Open Head Tracker - an open source eye gaze tracker. I've been researching head tracking software to see if my aunt, who's currently paralyzed, can use something like this, and naturally started checking out the open source offerings available. Given that she's in India, I wanted a Windows solution, as that's easier to purchase, service etc. for non-technical people.

So open gazer is based on technologies that are cross-platform: Python, Numpy, OpenCV and Qt. The original was tested on Ubuntu, and the author expresses the sentiment that since all the underlying software is cross-platform, and the software itself is in Python, it should work on other platforms.

He's right in principle. In practice, he is so wrong it's not funny.

Unlike in the Linux world, on Windows you usually install binaries; you just do. It's a wimpy, Windows-y thing. Sorry. All Windows software comes that way - and then you deal with Registry hell :)

And although each of the packages above was available as a binary, they were just not a compatible set. Two days of trial and error later, I got the MinGW-compiled release of the right version of each package installed and tested individually.

That didn't mean they worked together to bring up Open Head Tracker's display. I still get a segfault in cvResize (found after debugging through the source - thank god for scripting languages!) and I have no clue why. The only option remaining is to disable the use of SSE.

The only way to do that? Compile from source.

If my experience with trying to build Chrome is any indicator, that's another wild goose chase into build-time dependency hell.

There has to be a better way!

Sidebar: The last two days have been a heady roller coaster ride through Numpy and OpenCV code, however. I don't understand most of it - mainly because I was just trying to get Headtracker to work - but there's some cool stuff in there, including in Headtracker's deceptively small codebase. Qt's demos make it look really cool too - too bad the days of the Desktop are done now.

Sunday, March 20, 2011

Pinning PortableGit to the Windows 7 taskbar

Microsoft, in all its sagacity, decided that you couldn't pin batch files to the Windows 7 taskbar - something that people have been doing for ages, and have come to like.

Googling for the problem showed three main approaches:

  • Converting batch files to exe - bad
  • Putting all batch files in a folder and making a toolbar out of the folder - less bad but still yucky
  • Tricking W7 into thinking it's an exe by opening up the hidden folder that contains the taskbar shortcuts (Win-R, "Shell:user pinned" will get you there), and adding the shortcut manually. This didn't work beyond briefly adding said shortcut to the taskbar, which soon disappeared.
Since all I needed was one-click access to Portable Git, I thought maybe the executable could be called directly, so I ran git-bash.exe to test it out. pwd and exit would work, but not even ls.

So then I looked at what git-bash.bat did extra. Turns out all it essentially does is add some params to the call to the executable. So here, then, is the solution:
  • Select %PortableGitHome%\bin\bash.exe, and drag it to the taskbar
  • Rt click to show the pin menu, and rt click again on bash to reveal the properties window.
  • Change the Target to "%PortableGitHome%\bin\bash.exe --login -i"
  • For bonus points, change the icon to a Git-specific one. You could use the one from GitGUI, or a free alternative one
Note: %PortableGitHome% is not a real variable, replace with your install dir.

Thursday, March 17, 2011

TDD is an all-or-nothing proposition

I've been reading "Growing OO Software with tests" and trying to implement TDD practices in my projects, and I arrived at the sentiment that's the title of this post.

Let me explain.

Non-TDD is easy and alluring because you can choose to ignore certain errors. This is decidedly unprofessional, but in the early stages of the project it's invaluable. When you're trying to get your feet wet, and feel around the solution space, you DO want to know the big things in the design you're missing, but you don't want to know EVERY thing you're missing.

Since TDD implies executing code, you do have to fix everything that you've missed before you get to the interesting bits; and that is why I say TDD is an all or nothing proposition - you have to fix each and every bug in your straw man code before you can get it to work.

Usually, some of this extra burden is taken care of by the TDD framework that you're using - a typical TDD tool assumes a particular environment (e.g., Rails) and has the support that makes most of this go away - but for environments that don't have such support - Javascript-in-browser or console apps come to mind - building the framework while building the app gets to be tedious and frustrating.

That's not to say that with TDD tools the burden goes away entirely - it's just smaller. It'd be much nicer if you could "ignore" errors that aren't crucial to early-stage prototyping. For now, I solve the problem by having a prototyping phase/spike which isn't TDD-based, where I flesh out the solution concepts in throwaway code.

Saturday, March 12, 2011

Testing RIAs using pure javascript: the return of Selenium Core

As I try to use TDD to build Fluent, I've been drawn to the concept of keeping the environment pure: Fluent is intended to be a Javascript-only project, so I didn't see the point of having tools that were not Javascript. To this end, I wrote cuke4jas - so that I can write my feature specs in Javascript. That still leaves the gaping hole of DOM testing - which most people seem to fill using something like Selenium.

Except Selenium isn't Javascript alone - it minimally needs Java (for Selenium RC) to run.

Or does it? Selenium Core - the central piece of Selenium - is STILL pure Javascript; but over the years it's been so wrapped up with the other Selenium bits to make a complete solution that it's no longer even available as a separate download. This is not without reason - the same-origin policy ensures that Selenium Core will work only when you've installed it alongside the app that you want to test, which is usually not the case.

Except in cases like mine - where the app is completely in javascript, or at least completely in the browser.

It is possible to use the raw DOM functions, or even libraries like Prototype, jQuery or RightJS, directly with Jasmine to test the DOM, but Selenium Core brings to the table full-fledged support for automating all the user's actions in a browser (with wait versions as well), so that feature specs can be written with true user actions, not imitations of those actions by calling the functions that would eventually be executed.
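
For contrast, driving the DOM directly from Jasmine looks something like the sketch below (using jQuery; the form and its ids are made up). It exercises the handlers, but it isn't a real user clicking around:

    // Hypothetical form and ids - just to show the "call the code yourself" style.
    describe("login form", function () {
      it("shows an error when submitted empty", function () {
        $("#username").val("");
        $("#login-form").submit();   // fires the submit handler, not a real user action
        expect($("#login-error").text()).toMatch(/required/i);
      });
    });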

Design notes:

  • Extract out only the selenium-core bits from any Selenium-RC package. These would be in selenium-remote-control-xxx\selenium-server-xxx/selenium-server.jar/core
  • Subclass the TestLoop class (see HTMLTestRunner for a good implementation), and implement at least the nextCommand(), commandComplete(), commandError() and testComplete() functions. That should allow for a simple executor of selenium commands (a rough sketch follows after these notes).
  • Of course, TestLoop requires a CommandFactory, but the core has a default implementation that does most (all?) of the work.
  • Finally, the subclass will have to take in an array of commands to execute, which nextCommand would provide when requested.
The good part of all this is that you don't need to depend on the TestRunner.html, and therefore can contain all of this within Jasmine/Species/Cuke4jas's Specrunner.html, or even open up a new window and display the app running there.
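
To make those notes concrete, here's a rough sketch of the shape such a subclass could take. The exact constructor arguments, the shape of the command objects and the wiring to a CommandFactory differ between selenium-core versions, so treat this as illustrative pseudocode to check against the core source, not drop-in code:

    // Illustrative only: TestLoop and CommandFactory are real selenium-core names,
    // but the signatures below are assumptions - verify against the core source.
    function makeCommandLoop(commandFactory, commands) {
      var loop = new TestLoop(commandFactory);   // selenium-core's TestLoop
      var index = 0;

      loop.nextCommand = function () {
        // Hand the loop the next queued command, or null when we're done.
        return index < commands.length ? commands[index++] : null;
      };
      loop.commandComplete = function (result) {
        // Hook: report a passing step back to the Jasmine/cuke4jas spec.
      };
      loop.commandError = function (message) {
        // Hook: report a failing step.
      };
      loop.testComplete = function () {
        // All commands consumed - finish/resolve the spec here.
      };
      return loop;
    }

The commands array would hold the usual command/target/value triples (open, clickAndWait, assertTextPresent, ...), and the spec would kick the loop off and wait for testComplete to fire.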

Caveat:

  • All of this still requires a web server; the same-origin rules prevent file:/// resources from being used with Selenium. I've fixed it with a simple webrick script, although any other web server would work just fine.
  • Once more, this will work only on files served up from the same server as the selenium files are served up from.

Sunday, March 06, 2011

The need to code - my version

JacquesM (a long-timer on Hacker News) posted about his need to code in a passionate article that resonated with me a lot.

My experiences were similar, if not the same. We had some simple computer classes in school which were mostly spent playing Digger and Frogger, but we wrote some basic programs too. There was something visceral about writing something on a screen and seeing it come alive. BASIC as a language and DOS BASIC as an environment nailed that aspect - and how! You started the PC, typed basic at the prompt and were dropped into an editor that let you type programs that just ran! And BASIC made no presumption of modularity, so you could just use graphics commands in your program because the language had them.

Compare that to any attempt these days to teach programming to kids - all bound up in libraries to be imported before the first step is taken - and I'm including specific attempts like Shoes et al.

When it came time to pick an elective for Pre-University I decided to pick up Computers simply because I didn't want to do Biology, and I was neutral about the other option - Electronics. The instructors were indifferent, and the syllabus was not that great, but I was hooked. I found a friend who also knew BASIC and we devoured the Peter Norton book on x86 programming. We'd write assembly programs using Peek and Poke in BASIC - mainly TSRs for the fun of it. Our other project was writing a 2D graphics editor in BASIC. This took us all year because we wrote it on paper using pencils (to erase and rewrite lines of code that needed to be shifted), and went to another friend's house to use his dad's PC to enter the programs and see it run. There were a lot of GOSUB XXX lines (read: spaghetti code), but we pretty much carried the code in our heads and didn't stop talking about it. We finally did manage to get it working, and I believe I still have the batman portrait I drew using it. Have you ever printed anything by calling the DOS interrupt to dump the screen to the printer? That was what our editor's print function did :)

We graduated to Pascal from there because it was in our syllabus, and quickly discovered the joys of Turbo Pascal and all its cool extensions - especially the asm blocks and the graphics libraries. Of course, we wrote fractal programs and marveled at fractint.

From there it was to Turbo C - to revel in the freedom of C. I remember my first C program being pretty mundane - it converted a number into words. The challenge was that it had to print out the number the Indian way, which is not the simple thousands, millions, billions model. Instead we have the thousands, lakhs, crores model; and I remember agonizing over it to make it work. Mind you, I still didn't have a computer, so this was all still paper and pencil and mental debugging.

Here's why I completely agree with Jacques that coding is a drug: I was so happily engrossed in doing it that I didn't get good enough overall scores to get into a CS bachelors. So I picked Mechanical instead, and focused on the areas that were computer-centric - Finite Element Method, Graphics etc. My interest in graphics had led me to read the Schaum's book on Computer Graphics, so I already knew the vector math for those pieces long before we covered vector math in class. When we started doing engineering drawing using pencil and paper, projection systems were already familiar to me - because I was on my way to building my own 3D graphics engine using Turbo C++. All thanks to Robert Lafore for making OO click for me. No other book since has made it so lucid, at least for me.

That's the other thing - books. Unlike today, there was no easy access to books in India. My best sources were the scores of street booksellers who sold old books - and what a treasure trove of books they had. I learnt of Russian expert systems and computer architectures like nothing else, of APL the language in a book written by Iverson, of tons of old British computer magazines that introduced me to editors like Brief and to Hypercard clones, and a whole lot more - in all, a heady whiff of the ocean of opportunity that lay outside the staid old land of E. Balaguruswamy and Yashwant Kanetkar (these are probably still revered in Indian academia). Sadly, those booksellers now exclusively sell pirated Harry Potters and self-help best sellers, but thankfully there are Indian editions of good books nowadays.

But I digress. Throughout my undergrad my only access to computers was at college - so most of it was spent trying to get as much lab time as possible. Lots of social engineering went into this endeavor (sucking up to the CS Dept head to allow use of their lab, helping the lab assistant in the Robotics lab so I could use the only Sun machine, etc), and precious little code came out of it, but it was a heady time because anything was possible. I did manage to get the basic 3D graphics engine working (it proudly spun a sphere around, IIRC), and managed to present a paper on (what is now obviously basic) AI at the Computer Society of India student convention while doing mechanical engineering.

Fast forward to today: I've been an IT professional for 13 years now. I still code as much as possible - at work it's mostly architecture/design decisions or helping my teams with complex fixes, but I have a healthy github account and some more side projects at work. Coding makes me happy. I don't want to stop.

Thanks JacquesM for helping me remember why I do this.

Sunday, February 27, 2011

Idea extension: The dead letter box framework

This is an extension of the idea for the delicious replacement. The idea is to treat accesses to a website as messages dropped off - dead letter box style.

Dead letter boxes allow communication to happen without meeting, using a pre-agreed signal or format. The idea is to use a web server's log file as a dead letter box for various types of messages. Like so:

404 GET /dlb/tweet/my_tweet_as_an_underscored_resource_name
404 GET /dlb/todo?task=do%20something&priority=high&context=@home
404 GET /dlb/post-blog-entry?title=A%20Blog%20Title&body=A%20uuencoded_string_as_long_as_log_will_take_goes_here

Again, the way it would work is that a job wakes up regularly, greps the web server logs for these error messages, and does what's actually expected from each of these.
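
A minimal sketch of such a job in Node.js - the log location, the line format and the handlers are all assumptions, just to show the grep-and-dispatch shape:

    // Hypothetical: assumes each access-log line contains "404 GET /dlb/...".
    var fs = require("fs");

    var handlers = {
      tweet: function (rest, params) { /* post the resource name as a tweet */ },
      todo:  function (rest, params) { /* file params.task with params.priority */ }
    };

    function processLog(logFile) {
      fs.readFileSync(logFile, "utf8").split("\n").forEach(function (line) {
        var match = /404 GET \/dlb\/([^\/?\s]+)(\S*)/.exec(line);
        if (!match) return;                      // not a dead-letter drop
        var kind = match[1];                     // e.g. "tweet", "todo"
        var rest = match[2];                     // rest of the path or query string
        var params = {};
        (rest.split("?")[1] || "").split("&").forEach(function (pair) {
          if (!pair) return;
          var kv = pair.split("=");
          params[decodeURIComponent(kv[0])] = decodeURIComponent(kv[1] || "");
        });
        if (handlers[kind]) handlers[kind](rest, params);
      });
    }

    processLog("/var/log/apache2/access.log");   // assumed log location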

Subversion of http's actual intentions? Maybe. But it sure produces a very decoupled way of working - no high-capacity web server with custom code to handle all these operations needs to exist, and new operations can be added independently.

The best advantage of all? As long as you're able to hit a web server, you can tweet, log a todo, post to your blog, and much more. No more need to be connected to each of these different services.

Next: do the same with email, so that the same functions are available when you send an email. Although you could implement that as an actual email processor.

Not entirely addressed yet: security. Maybe making it https would suffice?

Sunday, February 20, 2011

Idea: An acceptance test framework for command line programs

This is an extension of the standard acceptance test framework pattern to the world of command line programs.
If you specify:
  • a set of scripts to run, 
  • a set of inputs and expected outputs, 
the framework runs them and reports the result of each test.

I have most of the framework working in another project; it should be easy to extract out and make into a standalone project.
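
This isn't that framework - just a rough sketch of the idea in Node.js, where each case names a script, the input to feed it and the output expected back (the script names and strings are made up):

    var exec = require("child_process").exec;

    // Hypothetical cases: script to run, stdin to feed it, stdout we expect back.
    var cases = [
      { script: "./greet.sh", input: "world\n", expected: "hello world\n" }
    ];

    cases.forEach(function (c) {
      var child = exec(c.script, function (err, stdout) {
        var ok = !err && stdout === c.expected;
        console.log((ok ? "PASS " : "FAIL ") + c.script +
                    (ok ? "" : "  expected " + JSON.stringify(c.expected) +
                               ", got " + JSON.stringify(stdout)));
      });
      child.stdin.end(c.input);   // feed the scripted input
    });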

Saturday, February 19, 2011

The problem with TDD for custom development

I had an idea for a cool app today. I wrote up the initial set of requirements, and came up with the high level "here's how the guts will work" to convince myself it can be done, and naturally started thinking of the architecture.

As soon as I had enough of the pieces "working" in my head, I thought, "well, great, let's write some code. But maybe I should do it the TDD way. Hmmm..."

Here's the problem: TDD tools don't exist for what I'm trying to do. It's not a webapp, and it most probably won't be in Ruby or Java. Or maybe I should write it in one of those languages, then? I don't know.

My point is that TDD is probably not appropriate when you're still doing exploratory architecting of the application itself. When you want to see the core pieces work, or you want to figure out what the core pieces even are, TDD will get in the way for one or both of two reasons:
  • You'll get hung up on the specification and miss out the architecting
  • You'll put your actual project on hold while you build out the TDD tools :)
Now TDD purists would say the specification will extract the architecture or clarify it, but IMO TDD is at its best when the second D implies automation.

As I've been writing this, I'm thinking there's another alternative: plan for TDD - decide when you'll start doing it, with what tools, and which parts of the app are appropriate to use it for - but don't do it just yet, not until you know the architecture itself.


Thursday, February 17, 2011

Installs should be hudson-simple

Idly musing about the causes for Google Wave to reach the place it did, it struck me that its new incarnation - the yet-to-be-unveiled "Apache Wave in a Box" - should be dead simple to install if it has any hope of adoption.

My gold standard for Java apps that are dead simple to install? Hudson - or now, Jenkins. It should be as simple as downloading a jar and running it. Everything else should flow from there - including updates.

Idea: InstantQ

It struck me as I was waiting at the bus stop the other day that urban living involves a lot of standing in queues. The bus stop follows the honor system by default, and people are generous to a fault (at least where I live in Chicago), but could technology help?

What if you could start a queue using your smartphone when you arrive at the stop? Anybody arriving after you would look for your queue and join it. When the bus arrives, it would automatically notify each person, in the order in which they joined the queue, that it was their turn.

Now this might be too slow for a bus stop, where once the bus arrives everyone gets in within a few seconds, but at any other place that involves waiting in a queue this might help avoid queue anxiety.

And now for the commercial version: any organization that needs to provide service to people and has one of those elaborate "Now serving" display systems could just buy the InstantQ software, which would run on a PC and "host a queue". Anybody with a phone could join, and as long as we provide the ability to join a queue via text, we could cover most of the population.
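
Stripped of the phones and displays, the core of InstantQ is just a first-come-first-served queue with notifications. A toy sketch (the notify callback stands in for a text message or push notification):

    function InstantQ(name) {
      this.name = name;
      this.waiting = [];   // people in arrival order
    }

    InstantQ.prototype.join = function (who, notify) {
      this.waiting.push({ who: who, notify: notify });
      return this.waiting.length;          // your position in the queue
    };

    InstantQ.prototype.serveNext = function () {
      var next = this.waiting.shift();     // strictly first come, first served
      if (next) next.notify("It's your turn, " + next.who);
      return next;
    };

    // The bus arrives: call serveNext() repeatedly and people are
    // notified in exactly the order they joined.
    var stop = new InstantQ("my bus stop");
    stop.join("alice", console.log);
    stop.join("bob", console.log);
    stop.serveNext();   // "It's your turn, alice"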


Tuesday, February 08, 2011

Jack/Webster/Fluent: Use YAML as the text format

YAML seems like a nice fit for a text-based format for Jack or Webster/fluent. The things that attracted me to YAML are:

  • Strings don't need to be quoted unless absolutely required. This is a huge advance over JSON
  • YAML has references.
It still might not be the best way to represent code as data, but it's close.
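
For illustration, here's the kind of thing I mean - unquoted strings, and a reference (&anchor / *alias) reusing an earlier node; the keys themselves are made up:

    defaults: &base
      language: javascript
      strict: true

    module:
      name: fluent core        # no quotes needed
      settings: *base          # reuse the node defined above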

Sunday, February 06, 2011

Reality Driven Development

I've started reading Growing OO Software - Guided by Tests, and two paragraphs in Chapter 1 struck me as interesting contrasts:
The catch is that few developers enjoy testing their code. In many development
groups, writing automated tests is seen as not “real” work compared to adding
features, and boring as well. Most people do not do as well as they should at
work they find uninspiring.
and a few lines later:
If we write tests all the way through the development process, we can build
up a safety net of automated regression tests that give us the confidence to make
changes.
It seemed to me that the first is grounded in reality and the second aspires to an idyllic future.

What if we came up with a methodology that actually assumed reality - and while we're at it, the worst possible one? The concept is nothing new - network design, for example, always assumes the worst: the internet's architecture is replete with strategies against the untoward happening, while fully expecting it to.

So, any methodology expecting to better the software development process should expect that:

  • The average programmer doesn't want to write tests
  • The code developed will outlive any one person involved in its creation and maintenance
  • The architecture, design and implementation will always represent the forces that were in effect at the time of their being decided upon, and therefore will be exactly what those forces required them to be
  • Forces will change over time, and will pull/push the architecture, design and implementation in ways not expected originally
  • Architects, Designers and Developers are forces in their own right, and will do things "in their own way" despite any official or external mandate.
  • Evolution of the software will be more Frankensteinian than Darwinian, i.e., all software will gravitate towards a Big Ball of Mud
  • Average developers will prefer tools to better practices, i.e., prefer fixing instances of bad behavior to changing the behavior
  • In a large enough organization, average developers are not cross-functional. They have a niche and are very happy in it. The exceptions do prove the rule. 
  • The average developer will tend to narrow the definition of his niche because his world is constantly expanding and it's difficult to keep up. The only possible exception to this rule is interview time, when the developer will make an all-out attempt to project an air of being a generalist.
I could keep going, but you get the general idea. That, then, is Reality Driven Development. Nothing new here, I just gave a name to something we all know - kinda like Ajax :)

How to practice RDD, you ask? Well you already ARE - this is the status quo :).

If you're intent on changing that status quo for a better reality however, the first step is to accept the reality. This might be easier for the actual developers to see as that IS the reality, but for people intent on changing that reality it might be a little bit more difficult. I personally am someone trying desperately to close my eyes to this reality because it doesn't fit the ideal world of "how programming should be done". I'm guessing that proponents of Design Patterns, proponents of best practices of any kind, mature developers and TDD/ATDD/BDD practitioners would feel the same way. "If only we could get Joe Developer to see the light" seems to be the underlying sentiment; but accept we must.

Once we accept that this is how we actually build software, we can move in quite a few ways towards a better outcome, and again by extension from fault-tolerant network design, I present some ideas:
  • Quality doesn't have to be absolute: However your app currently works, it does. Don't let your next step be 100% quality. Instead focus on the next 5% increment.
  • A model of layers of reliable quality built over ones that aren't: Remember the OSI model, where each layer did one thing right but was expected to do others not so well? And how the layers above did those things right? This is an extension of that idea. I don't have an exact suggestion yet on how this should be applied to the list of problems above, but it seems like this is the approach that any solution should adopt.
  • Support over prescription: This particularly addresses changes in behavior such as TDD and BDD. Asking developers to turn their workflow on its head is not likely to be accepted except by those already predisposed to changing it. Instead, make the adoption easy by providing support. For example, why not create a tool that records the outcome of any debug session as a JUnit test automatically, instead of expecting the developer to hand-write the test?
I realize that the ideas above are not exactly fleshed out, but I'm alluding toward an approach to software development that's grounded in reality, and aims at improving the overall maturity and reliability over time. I don't mean something like CMM, however, because its interpretation has almost always meant handing off the quality responsibility to an external auditor. I'm leaning more towards something like the agile manifesto, but grounded in reality. 

Note on CMM and its interpretation: I have found that CMM is more often than not interpreted as an organization compliance initiative, not as a means to measure maturity and improve. This is exactly opposite to the CMM's stated intent, and can therefore be ascribed to flaws in the interpretation of the model. The most visible parts of the CMM machine, however, are always big, up-front audits and compliance checks. It's no surprise, therefore, that the average developer treats the CMM process with suspicion, and its outcomes even more so.

Note on interpretation in general: TDD, Agile and such best practices suffer the same issue of the gap between espoused ideal vs interpretation of that ideal by practitioners. RDD is a response to this gap.

Concept: Treat requirements as external forces, especially non-functional requirements

Along the lines of my idea to use FEM-like analysis on code, I realized that it should be possible to model the things that shape the architecture and design of the software over time as physical forces.

Case in point: My team is currently working on adding multi-lingual and multi-currency support to a fairly large app. Adding this predictably requires changes across the board, and led me to visualize all the places we're pushing and pulling at the current design/architecture to make this happen.

Could this instead be considered a single external force acting on the elements of the system? Can we study the "material properties" of the code under such forces and "sum up" the forces?

If we could do that, it would certainly move us out of the current dark ages where cost of change is still measured as a list of things to change per a specific change plan.

Saturday, February 05, 2011

Maintainability of code

Most of my career has involved maintaining code built by others. I'd wager that most developers' careers have been the same, even if they don't want to admit it. Sometimes maintenance goes under the name of updating your own code, so people delude themselves into thinking they're not maintaining code, they're just creating the next version. From a maintainability POV, any code that's built is legacy.

And yet most (all?) methodologies of software development either address completely start-from-scratch greenfield development, or propose ideal practices that assume mature, passionate developers who are in the field for the love of programming.

The reality, however, is that most development involves enhancing or maintaining code - usually somebody else's, and mostly crufty; and in all likelihood the crufty code is from somebody whose presence in the industry is happenstance rather than a planned event. If you're lucky, it merely represents somebody's state of maturity at the time of writing, and that person has since improved.

A comprehensive methodology for maintainable software, therefore, must:
  • To address the legacy code angle:
    • Provide mechanisms to create "good enough" comprehension of the code
      • But avoid attempts at large scale or totalitarian model-driven comprehension. Such attempts will always fail in the real world simply because there will always be something outside the model
      • That is, allow for leaky abstractions
      • That is, allow for manual steps. There will always be manual steps. The methodology should allow for that, by exclusion or by explicit statement. The former is easier.
    • Help identify what changes and what doesn't. Easier: help identify what has and hasn't changed in the past, and let the humans extrapolate to the future.
    • Provide a means to migrate from what is to what should be that's incremental.
  • To address the maturity concern:
    • Allow for different levels of maturity
    • Allow for the ability to define "good enough" architecture, design and code; and the ability to easily enforce it
    • Allow quick enough comprehension of these definitions
    • Allow for gradual adoption, and a means to measure progress
The usual solution is to relegate this to Software Engineering, which typically spirals into talks of CMM and suchlike - process heavy, people agnostic.

The reality, however, is that software development is largely a human effort, precisely because it lacks the usual shackles of other human endeavors. A mechanical, electrical or electronics engineer will always hit upon some natural limit. Not so the software engineer. His limits are the limits of the mind. If you can think it, you can make it.

And therein lies the problem. If you can think it, so can a multitude of other software engineers; and each such mind can think of at least one variation to the same problem using the same essential solution. This is why I believe we will not see Software ICs any time soon. Most process oriented methodologies tend to gravitate towards this concept, or to the equivalent one of "resources tasked to do X will repeatably produce output of increasing metric Y".

Meantime, in the real world, human programmers are finding ways to game that system. 

As software practitioners, what are we doing to better this? There seem to be promising starts with the BDD and (to a smaller extent) TDD movements, and (on a less focused but more generic scale) with the general move towards declarative programming. There are some incremental gains from tooling in general, but those gains are largely in the area of reducing the steps that the interface requires you to go through to achieve any particular task. There's also some progress in the architecture analysis, validation and enforcement that constructs such as DSM and its ilk provide - if practiced regularly.

However, all of these lean toward the mature, self-aware programmer. By the time it reaches Joe Developer, it's an organization mandate, not something he WANTS to do. So the gaming cycle begins. This time, however, it's insidious because it projects the impression that code quality has improved. We therefore need tools and methods that "work in the trenches". I don't have a good enough answer, but here are some interim steps that I can think of:
  • Allow for easy documentation of what exists, and in such a way that the documentation follows changes. Working diagrams are a good way of documenting the subset of the code that's understood.
  • Use tagging as a means of collating the documentation created thus. It's easy and "good enough".
  • Don't lose what you gain. Developers gain a lot of system knowledge during debugging. There's no easy way of saving that for posterity, so the knowledge is lost. Do not expect developers to write a wiki page - they don't have the time. This area is ripe for tools such as:
    • bookmarking and code traversal trails being saved. 
    • A "tweet my XApp-fu" app is what's required here.
    • A way to share all of this that's searchable
  • Make creating tests a 1-click affair. Any run of the software should be convertible into a test - especially a manual run during the normal write-compile-test cycle that the developer engages in.
  • Allow cheating on setting up the context. In most production systems, it's not easy to inject mocks or test doubles. Test automation should allow for as little of the system being mocked out as possible.
  • Mindset changes: somebody on the team needs to evangelize these:
    • "Works on integration box" is less useful than "PROVABLY works on any box". This implies CI without requiring buying into the CI religion.
    • Exemplar test data is better than just test data. Exemplars are test data sets that "stand for" a scenario, not a particular test run. 

Webster/Fluent/Jack: Create a text-based syntax as well

I realized that while I'm trying to usher in better programmer tools, most programmers might already be comfortable with their existing ones. So to foster better adoption, the simpler idea might be to allow a text-based syntax as well.

Hopefully, they won't be able to escape the wily wares of better tools for too long!

Friday, February 04, 2011

Idea: Queues: a visualization of work queues

I have been thinking about the concepts mentioned in Product Development Flow, and visualizing a dashboard based on the Queues mentioned in that book. An agile team could be represented minimally by a queue - the list of tasks in its backlog - and by how the team processes that list. If we had a visualization of this, it would be easy to represent how well/efficiently the team is working.

Extending this downwards, the team's performance could be measured in different ways: as a summation of each team member's work queue (which would allow per-team-member productivity and efficiency estimation), or as a summation of work completed, or (even more interesting) as a summation of the different workstreams it's currently undertaking.

Extending this upwards, a set of teams or the entire organization could be measured in terms of the queues for each team, and the attributes of each queue chosen to be measured.

Speaking in software terms, I'm thinking of creating:

  • A system that allows the creation of a queue, and a visualization of it.
  • The ability to create parent and child queues
  • The ability to "sum up" based on arbitrary attributes of each queue

Monday, January 31, 2011

An easy launcher for redcar on Windows

I just (re)discovered Redcar, and have been playing with it. I love it so far (especially the seamless git integration, and the tree-document-view concepts) but I find unnecessary windows irritating, so I obviously didn't like that Redcar needed a command window open in addition to its own SWT one. Here's how I made launching Redcar a 1-click affair.
  1. Install redcar and make sure it works on its own, i.e., typing redcar in any command window should start it up.
  2. Convert the redcar icon from png to ico using imagemagick. You obviously need a unix box for this, or imagemagick installed on windows. Online tools are also available, but this seems the most programmable and quick - especially if you have a unix box around :)
     [user@box]$ convert -resize x16 -gravity center -crop 16x16+0+0 redcar-icon-beta.png -transparent white -colors 256 redcar-icon-beta.ico
  3. Create a windows shell script (redcar.vbs) that loads up redcar without a command window
        ' Launch redcar via the shell; the trailing 0 runs it with a hidden window
        Set WshShell = CreateObject("WScript.Shell")
        WshShell.Run chr(34) & "redcar" & Chr(34), 0
        Set WshShell = Nothing
  4. Open up a launch folder of your choice. Mine is the Quick Launch bar.
  5. Rt click on the folder, New -> shortcut, and in the wizard that shows up, enter the following:
    1. Location: drive:\path\to\redcar.vbs
    2. Name: redcar
  6. Rt click on the shortcut -> Properties. Then change the following:
    1. Change Icon -> Look for icons in file: drive:\path\to\redcar.ico
  7. In the folder (or in my case, on the Quick Launch bar), Click on the Redcar icon. Redcar should now load up straight to the splash screen.
    Notes:
    • The one drawback with this is that you no longer see the debug information that the command window provides. I could not have known that Redcar looks for git during its launch without that output. So there's still reason to launch redcar from the command line. This method doesn't prevent such launches.
    • I noticed that there's a github project creating a windows launcher for redcar, but it seems to me like the steps above could be easily automated and the artifacts added to the redcar core distribution for a simpler launcher. Maybe I'll fork redcar and add them :)
    • If you're using Portable Git on Windows like me, the key to having redcar find git outside MinGW is to add the following to your path:
    drive:\path\to\PortableGit-xxx\cmd

Tuesday, January 25, 2011

Jack: Javascript should be the base language

It just struck me this morning as I was getting ready that the base language for Jack - my idea of the next gen programming language - should be Javascript. The good parts, of course, but definitely Javascript.

This ensures that it has the widest spread of runtime environments, has the functional bits built in, and - as long as we keep the base concepts to the good parts - none of the ugliness.

Monday, January 24, 2011

Idea: A textmate rewrite in Fantom

Textmate seems ripe for a rewrite to a platform outside of Macs, and what better language than Fantom?

Sigh. Another project idea that'll have to wait.

Sunday, January 23, 2011

Idea: More Effective Workspaces

Workspaces are invaluable in keeping focus on the job at hand. I've been used to workspaces on Linux desktops for a long time, and recently discovered a Windows powertoy - MSDVM - that does the same on Windows. Here are a couple of ideas I've had that could make them more effective:
  • A per-workspace todo list: so that when you form a plan of what to do in that workspace, the plan is always visible
  • A "Send todos to workspace X" feature: I frequently find myself remembering to do something official while I'm on my personal workspace. It would be nice to be able to send a todo to that todo list without losing context.
  • An "I'm stepping out" button, with a tickler for when you return: clicking it should allow you to enter quick text on what you're currently doing, so you have a high-visibility "This is what I was doing before I stepped away" note on the workspace when you return. I constantly find myself doing this with a new notepad instance even though I have other todo lists - just so it's front and center when I return. It would be nice if the workspace did it for me.
Of course, all of this can be done with current software and some manual setup, but it would be nice if the todo lists were workspace-aware, so this works out of the box.