I was reading the Wikipedia page on Information Theory as part of my read-the-wiki project when it struck me that there might be an objective way of measuring the effectiveness of text vs. visual programming languages using information theory.
The central concepts (whose math I'm admittedly unable to fathom) are those of information, information rate, entropy and SNR. One of the age-old cases for text-based programming (and therefore against non-textual programming languages) has been that it has a very high SNR and high "information density" for a given amount of screen real estate.
Is that really true, though? How much "noise" does syntax add? At the other end of the spectrum, I've seen infographics that clearly deliver more "understanding" in the same screen space than an equivalent textual description. Is it possible to design an "infographic-style" programming language that packs more power per square inch than ASCII?
It would be interesting to do some analysis on this area.
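As a very crude first sketch of what such an analysis might look like, here's a toy comparison of character-level Shannon entropy for a line of code versus an English description of the same idea. The two strings are made-up examples, and per-character entropy is at best a rough proxy for "information density" (it says nothing about how a human reader decodes either representation, let alone a 2-D visual notation), but it gives the flavour of the measurement:

```python
import math
from collections import Counter

def char_entropy(text):
    """Shannon entropy of a string, in bits per character."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Hypothetical example: the same idea expressed as terse code and as prose.
snippet = "xs = [x*x for x in range(10)]"
prose = "make a list containing the square of every whole number from zero up to nine"

for label, s in [("code", snippet), ("prose", prose)]:
    h = char_entropy(s)
    print(f"{label}: {len(s)} chars, {h:.2f} bits/char, ~{h * len(s):.0f} bits total")
```

A real study would obviously need a much better model of "signal" than raw character statistics, but even this toy version hints at how one might start quantifying the screen-real-estate argument.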