filed under: benjamin, effectiveness, geek, kristina, lent, liam, life, netbsd, perl, pictures, pkgsrc, programming, rainskit.com, religion, reviews, tagging, tru_tags, vegetarian
1462 days ago
Of course, that means that the affected people aren’t going to hear that they’re affected. Sorry about that! (I’ll personally tell the few I know.)
In fact, I’m likely to switch to ikiwiki …eventually. Textpattern seems to have lost its mojo, and there have been some long-standing issues with it (like no native tagging support!) that seem unlikely to ever get fixed. And I’m hip to the cool technologies now, so a more infrastructure-like framework (i.e. ikiwiki, with git) for my blog feels like a better answer. And schmonz volunteered to do most of the work :)
That also means I’ll probably abandon tru_tags …more than I already have. There hasn’t been anything to do with it in a long while, at least not that I felt was worthwhile to be done. Most of the features that remain to be implemented require a major refactoring of the core Textpattern code, and that just seems very unlikely to happen (by me or anybody else) any time soon. So hopefully it will remain useful to the people who still use it.
This year’s Lent
I have utterly failed at this year’s Lent give-up. I have been better at going to bed at a reasonable hour, sometimes for days at a time. But I simply can’t do everything I need/want to do in my life with the few hours that leaves me between work, kids, and chores. So sleep will continue to lose to projects – although less so than it used to. There are some nice perks to getting more sleep – I’m much more on-the-ball and willing to take on mental tasks that otherwise seem hard. But that extra value doesn’t offset the lost value from just not being able to do all the things I need to do.
Speaking of Lent, I also broke a 5-year streak of vegetarianism a week or so ago. Benjamin, Liam, and I had some extremely delicious tilapia, also breaking both boys’ life-long vegetarian streaks. Kristina chose not to participate.
We had a bunch of reasons for deciding to do it. And a bunch of reasons to not do it (i.e. to stay vegetarian). I may blog about all the tradeoffs some day soon, but for now, suffice it to say that it was a very close decision, and I’m not sure what’s next.
I made a web app!
If you recall, I started using SmugMug for my online gallery a few years ago. But when I made the switch, I left behind an old gallery site (on Menalto Gallery 1) that I’d been meaning to clean up for a long time. It broke a while ago, motivating me to finally migrate off that old software – to ZenPhoto, which had been my long-standing plan. It took a few days to get ZenPhoto working (when it should have been easy!), but I got it there, and I shut off the old site.
I also started this exchange with the ZenPhoto dev, in which I started by being too grumpy and he finished by insisting that his software simply must be unsupportable for him to support it. Net effect: I had to get off ZenPhoto.
But I had no alternate destination for self-hosting my images. My long-term goal is to migrate the images to SmugMug, but I want to filter them down from “every picture I took during that time period” to “just the best ones, tagged and rated” (like all the other pictures I post to SmugMug). And it will take Nathan-weeks of work to get that done, so it keeps getting put off. So in the short term I just needed a new self-hosting product, and there just aren’t any good alternatives. They’re all either old or ugly or badly designed or some combination of those three.
So I made one myself. I’ve never made a web app from scratch before, but I am quite comfortable in perl, had used Catalyst from a prior job, and I’d heard then that Mojolicious is better. So I tried it.
And wow, was it easy. Probably 8 hours total from “install mojolicious” to “the gallery is up and running on the new software”. That’s only just a little more than I spent trying to get ZenPhoto to work. Many kudos to Mojolicious, perl, and pkgsrc.
Now… ZenPhoto does way more stuff. (TONS more… too much, actually.) And this new software isn’t really ready for someone else to use it. And it has no tests. And it only does one extremely simple thing (i.e. serve nested directories of images, in name-sorted order, with no metadata).
But the code is small, easy to read, and easy to modify. (Roughly 300 lines of code, 115 lines of CSS, and 80 lines of HTML template.) The site looks really good (in my opinion). And it doesn’t require a database – just a directory full of images. And with some app-level caching and the help of Mojolicious’s preforking web server and great documentation for setting it up under apache mod_proxy, it’s about as fast as it could possibly be on my old host and slow network connection.
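To give a feel for why it came together so fast: the whole thing is close to a Mojolicious::Lite one-pager. Here’s a minimal sketch of the approach – the root path, template, and route are illustrative assumptions, not my actual code, and a real version needs to sanitize the path against ../ tricks:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Mojolicious::Lite;

# Root directory full of images -- illustrative path, not my real one.
my $root = '/var/www/gallery';

# One wildcard route handles every nested album and every image file.
# NOTE: no ../ guarding here -- a real version must sanitize the path.
get '/*dir' => { dir => '' } => sub {
    my $c    = shift;
    my $path = "$root/" . $c->stash('dir');
    return $c->reply->file($path) if -f $path;    # serve the image itself
    return $c->reply->not_found unless -d $path;
    opendir(my $dh, $path) or return $c->reply->not_found;
    my @entries = sort grep { !/^\./ } readdir $dh;    # name-sorted, no dotfiles
    closedir $dh;
    $c->render(
        template => 'album',
        albums   => [ grep { -d "$path/$_" } @entries ],
        images   => [ grep { /\.(jpe?g|png|gif)$/i } @entries ],
    );
};

app->start;

__DATA__
@@ album.html.ep
<ul>
% for my $a (@$albums) {
  <li><a href="<%= $a %>/"><%= $a %></a></li>
% }
% for my $i (@$images) {
  <li><img src="<%= $i %>"></li>
% }
</ul>
```

Run it under Mojolicious’s preforking hypnotoad server and proxy to it from apache, roughly as described above.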
So ZenPhoto is out and my home-grown software is in. Here’s hoping it doesn’t need maintenance!
filed under: agile, effectiveness, product management, programming, reviews
2336 days ago
I just sent this note to Pivotal Labs:
In a world where everybody talks about “agile” but hardly anybody knows what they are talking about, and where it is very rare to see an agile team (by which I mean a group of people who are actually a “team” and are actually “agile”), and where product managers usually struggle to even maintain control over prioritization, let alone actually manage it well – it seems very unlikely that any software would exist that is designed to work in well-functioning agile teams.
And it seems impossible that such software, produced for such a tiny market, would be brilliantly designed, brilliantly executed, and just always there when you need it. For free.
So I don’t know how or why you do it, or what the world did to deserve it, or why I was lucky enough to find it. But THANK YOU, THANK YOU, THANK YOU, for Pivotal Tracker.
filed under: benjamin, effectiveness, pictures, programming, rainskit.com, reviews, smuganizer, usability, weaknesses
2585 days ago
As I mentioned in the announcement, I have a temporary photo gallery set up with some early pictures of Benjamin in it. But I password protected that gallery, not because of any particular security or privacy concerns, but simply because the gallery is not in its final home, and I don’t want to publish the gallery to the wider internet until it has reached said destination. Recently, a friend asked about the delay in posting more pictures, and offered to help resolve any problems that might be impeding progress. I wrote a very long reply, which I have quoted (mostly) below.
It is, I think, an interesting way to both reveal why I haven’t opened up the gallery, and to allow my readership to understand more about me. Because in this email, it is clear how my perfectionism and my pragmatism do battle, and how I usually seek to resolve such conflicts.
And if you do take the time to read all the way to the end, please feel free to provide any suggestions!
Let me explain the root problem(s):
I plan to switch my pictures from gallery.rainskit.com (which uses Menalto Gallery) over to use SmugMug, and in fact have already paid SmugMug for a year of service which has already elapsed. (I signed up over a year ago.)
I don’t want to start dumping Benjamin pictures into Menalto; I have numerous other albums (like Thanksgiving from last year) that I haven’t uploaded to Menalto because I told myself that I was going to force a hard stop on using Menalto, to encourage me to finish my switch to SmugMug. So I don’t want to break that rule for Benjamin, and I also don’t want to publish one URL for Benjamin pictures and then change it to another URL later.
I don’t expect to be able to use gallery.rainskit.com for my SmugMug site, because I have other users of my Menalto gallery who won’t want to have the URL change out from under them. So I’ll have to leave Menalto at the old URL, and come up with a new URL for SmugMug.
When I tried to convert my gallery over to SmugMug, I discovered a (frustrating!) limitation of SmugMug wherein it doesn’t allow infinite nesting of albums. Specifically, it forces me to organize my pictures in a particular hierarchy, either:
Category -> Album -> Image
Category -> Subcategory -> Album -> Image
So some of my Menalto albums are nested 5 or 6 layers deep, which won’t fit into SmugMug’s paradigm. Also, some of my Menalto albums have both images and sub-albums, which won’t fit into SmugMug’s paradigm.
So a long time ago (April of ’09) I started work on Smuganizer, a tool to help me convert my Menalto gallery over to SmugMug. That tool has grown into a fairly awesome product, but it isn’t quite done yet – mostly because it has a few important missing features, and the documentation is out of date (and misleading!). Note, however, that SmugMug has given me a free Pro account for as long as I continue to maintain Smuganizer, so I don’t currently have to pay for my SmugMug account.
And I’ve been using my SmugMug site as the test database for Smuganizer, largely because I don’t have any other available SmugMug account. So my current SmugMug site (which is entirely password-protected) is filled with random test data, and is unsuitable for public consumption.
Concurrently with all of this, I discovered Windows Live Photo Gallery, a free app from Microsoft that (finally!) just works the way photo gallery apps always should have worked. Really. I have always hated photo management apps, up until now. Now, I tell people that they should use it. (It does have some major flaws/gaps, but they are not sufficient to keep me from loving it anyway.)
One of the major features of WLPG is that you can tag people in pictures (like Facebook) and/or add arbitrary tags to images and/or give ratings (1-5 stars) to images, and then instantly browse your whole library by those elements (plus by date). They also make it really easy to publish selected photos to arbitrary photo sites, like SmugMug. So suddenly I have a really strong desktop app for managing my pictures, and I find myself caring much less about putting my entire photo library online.
So I modified my plan about converting from Menalto to SmugMug, such that I have decided instead to download all my Menalto pictures to my computer, tag and rate them all there, store them there primarily, and only upload the best ones to SmugMug. In other words, use SmugMug much like a normal human would use a photo gallery.
Problem is, that takes a lot of time. I’m only about halfway through my existing pictures. And I’ve been working on it for 6 months or more.
Note that this also makes Smuganizer largely irrelevant to my current needs :) (Except that Smuganizer can also be used to upload pictures from my computer, and to manage the pictures once they are on SmugMug, so it does still have value to me.)
Note that this also means I won’t have an off-site backup for my entire gallery any more (like I had when you were hosting my gallery). To solve that problem, I signed up for Carbonite.
Net effect, I have a bunch of things that theoretically need to be resolved before I start posting more Benjamin pictures to SmugMug:
a) Finish tagging my existing photos
b) Finish and publish Smuganizer
c) Delete all the existing stuff out of SmugMug
d) Figure out how to organize my SmugMug gallery
e) Get SmugMug set up on its permanent URL
f) Upload my ‘featured’ pictures to SmugMug
g) Upload the new Benjamin pictures to wherever they fit in that structure
Of course, I recognize that this will take a year or more, and that Benjamin pictures can’t wait that long. So I figure I have a number of options:
1) Abandon Smuganizer, don’t worry about the other pictures, and just clear out SmugMug and upload Benjamin pictures for now. That would only require steps (c), (d), (e), and (g), and could probably be done in a few hours.
2) Try to split my SmugMug gallery into a few “Testing” categories and then “everything else” and just password protect the “Testing” categories. Go ahead and upload the Benjamin pictures into their final home, while concurrently working on everything else.
3) Some other option I haven’t thought of yet.
4) Follow the original plan and just wait until it is all done before publishing more Benjamin pictures.
5) Publish the Benjamin pictures on the Menalto gallery.
So I figure you can help in a few possible ways:
i) Talk me out of the tree and just convince me to do (5)
ii) Help me with (d) so I can do option (2)
iii) Come up with an idea for (3)
iv) Talk me into (1) (Note that this is probably impossible)
So you can see my dilemma :)
filed under: business, effectiveness, intelligence, links, programming, weaknesses
2699 days ago
There are many people in my industry who are “smart” but are often unable to actually be effective. I have numerous examples: developers who can’t balance perfection and progress, entrepreneurs who can’t see that their idea is useless, executives who can’t see the inevitable failure of their plan, and people who just can’t figure out how to turn their great idea into something real. I run into such people, in varying degrees, nearly every day.
In fact, I have struggled with this myself. When I first started my career as a developer, I had a hard time balancing the intellectual purity of an idea against the “messy” path to actually bringing that idea to implementation. It’s hard to accept that the perfect idea really isn’t feasible, and instead opt for something less-perfect in order to actually get something done. But I have learned this lesson (repeatedly!), and much of my success in business has come from learning to understand and accept that some progress toward a slightly better place is much better than no progress toward a perfect place. In fact, I’m now more often a proponent of the other side of the coin – I’d much rather just do something (useful) than try to engineer a perfect solution. So long as smart, capable people are involved in the doing, the end product is usually awesome.
So I am very intimately aware that “high IQ” is not the same as “highly effective.” I’ve known it for a long time, but I’ve never been able to clearly understand exactly why that is. Well, Keith Stanovich figured it out for me. He studied this issue, and learned something relatively obvious – that IQ is a measure of intellectual capacity, but capacity is not the same as ability to use it. (Size doesn’t matter, right?) He uses the term “rational thinking” to describe the ability to use intelligence to solve problems, and this article at New Scientist covers the topic very well.
Go read that article. It will hopefully help you understand that IQ is only somewhat related to success, and that rational thinking is more important. And rational thinking can be learned, and improved on, relatively easily. So there’s hope for all of us, to actually learn to be effective!
Having read that article, I am pleased to have sorted out an intellectual conundrum, but I’m also somewhat embarrassed – I’ve been teaching people this idea for years now, and I just didn’t realize it. See, when I teach people what to look for when interviewing, I refer them to Bloom’s Taxonomy, specifically to the six levels of cognitive skills: knowledge, comprehension, application, analysis, synthesis, and evaluation.
To be successful in the roles I’m usually hiring for (Analyst, Project Manager, or similar), the person needs to be highly capable in the top three levels – analysis, synthesis, and evaluation. There are good ways to try to evaluate those things in an interview, and I have a very specific set of interview questions and activities that try to draw them out. (This idea has worked very well, by the way – I’ve been very successful at interviewing and hiring, using this approach.)
So it seems to me, now that I’ve thought through the idea of rational thinking, that Bloom’s Taxonomy isn’t really about intelligence at all. Instead, it is focused on the skills required to apply intelligence effectively. That is corroborated by the fact that the Taxonomy is often used in education as a way to judge how well a student is learning fundamental skills, and not as a way to judge their intelligence.
So the embarrassing part is that I’ve been using Bloom’s Taxonomy (and teaching it to others!) as a way to evaluate people’s effectiveness, all the while trying to understand why high-IQ people aren’t always effective. If I had just once put the two ideas next to each other, I probably would have figured out the answer for myself. Huh.
Maybe that’s just proof that I still need to work on both, myself ;)
P.S. – I also owe a big debt of gratitude to the late Mrs. Lilly, the teacher who taught me about Bloom’s Taxonomy in elementary school, and who I know was responsible for accelerating my early development in analysis, synthesis, and evaluation. Thank you, Mrs. Lilly!
The image of Bloom’s Taxonomy was reused (from Wikimedia Commons) under the Creative Commons Attribution ShareAlike 3.0 license
filed under: programming, rainskit.com, tagging, tru_tags, usability
3219 days ago
I’m working on the next version of tru_tags and one of its major features is the ability to create a tag-based archive page, like this one. After implementing the feature, I tried it out, and I liked it so much that I decided to use it on this site. Specifically, I removed the old “About” page, merged some of that content into the “Links” page (and generally edited that page), and put the “Archive” page in where the About page used to be.
This should all make sense if you look at the menu links at the top of the site. For those of you reading this via the feed – click here to see it.
It’s a really interesting page to browse through – I find it strangely fascinating to see so clearly all the articles I’ve written, and how they clump together. I’ve also used it a few times as a faster way to get to a specific page. It’s somehow more powerful than the normal tag cloud, functionally and emotionally, and that surprises me.
Or maybe it’s just late :)
filed under: family, geek, netbsd, programming
3734 days ago
Waaaay back in April of 2006, my cousin Karen (and her husband David) had a hard drive failure, resulting in the loss of all their digital pictures of their baby daughter Ella. They didn’t have backups, and they didn’t have prints. Computer shops weren’t able to help, and clean-room data recovery was too expensive. Eventually, they gave up on it and sent an email to the family asking if we could send them copies of any pictures we had.
I, being a geek, offered to take a look at the drive and see if I could get anything off of it. They, lacking any better options, sent the drive to me so I could give it a shot.
I dropped the drive into my NetBSD machine and sure enough, the BIOS recognized it. That meant that the drive was still physically working, and that I might have a chance at getting whatever data was left off the drive. I was able to mount the drive (read-only) and read data from it, which meant that there was a good chance of finding at least some of the pictures.
I did a bunch of googling and learned a lot about home-brew data recovery. To sum it up, I learned that you need:
- A copy of the dd utility that supports the options needed to read past bad sectors
- A handy program that knows how to find images on a raw drive image (see below)
I was able to make a copy of the bad drive (onto a good drive of mine) using dd and the instructions on this page. There were a lot of bad drive sectors encountered during the copy, which was to be expected given that the drive had failed in the first place, but I was hopeful about finding at least some of the images. And having the drive copy was a big win – it meant that I could work with a copy that wouldn’t get worse if the hard drive took a dive.
The hard part was finding a program that could recover the images. For a long time I thought I was going to have to write my own, and that seemed a daunting task, but I finally found this guy who had this exact same problem and already wrote this program. It was exactly what I needed – a program that would find and extract images from a raw drive!
The only problem was that jpg-recover was running at about 15kb/s. At that rate, getting through the 80gb hard drive would have taken about 60 days to finish. I didn’t have 60 days (without interruption!) to wait.
So I dug into the code and discovered that it was horribly inefficient. It was reading data one byte at a time, checking for an image after each byte read, and just generally not being smart about performance. I set about improving it.
I was able to do so. My version takes more memory (a configurable amount) but it runs much faster: at about 12000kb/s. That’s 800x faster :) At that rate, it only took about two hours to finish, finding (after some tuning) 4,422 potential images, of which 187 were uncorrupted pictures of Ella – including this one.
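The core of the speedup is just buffered scanning: read the drive image in big chunks instead of single bytes, then search each chunk for the JPEG start marker (FF D8 FF) and end marker (FF D9). Here’s a rough sketch of the idea – simplified for illustration (it ignores images that straddle chunk boundaries, which the real script has to handle), and not the actual jpg-recover-faster code:

```perl
use strict;
use warnings;

# Pull every FF D8 FF ... FF D9 run out of a buffer of raw bytes.
sub extract_jpegs {
    my ($buf) = @_;
    my @found;
    my $pos = 0;
    while (($pos = index($buf, "\xFF\xD8\xFF", $pos)) >= 0) {
        my $end = index($buf, "\xFF\xD9", $pos + 3);
        last if $end < 0;
        push @found, substr($buf, $pos, $end + 2 - $pos);
        $pos = $end + 2;
    }
    return @found;
}

# Read the drive image a megabyte at a time, instead of byte by byte.
sub scan_image_file {
    my ($path, $chunk_size) = @_;
    $chunk_size ||= 1 << 20;
    open(my $fh, '<:raw', $path) or die "open $path: $!";
    my @jpegs;
    while (read($fh, my $chunk, $chunk_size)) {
        push @jpegs, extract_jpegs($chunk);
    }
    close $fh;
    return @jpegs;
}
```

Scanning a megabyte buffer with index() beats calling read() once per byte by orders of magnitude, which is where the 800x comes from.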
That felt good :)
I’ve published my version of the program as jpg-recover-faster. It’s a perl script, so you’ll need perl. I make no guarantees about the lack of bugs – use this script at your own risk. It may eat your children. ;)
You’ll want to read the comment at the top of the script before using it, and the other pages listed above will help you figure out how to use it. Feel free to post comments here with questions or suggestions.
filed under: geek, programming, textpattern
3761 days ago
I recently submitted a patch to Textpattern to refactor some of the code to eliminate a lot of duplication and allow me to generate tag-based RSS feeds from my plugin. My patch has been applied to the “crockery” branch and will be a part of Textpattern 4.1. Cool!
filed under: geek, programming, textpattern
3773 days ago
(In case the title didn’t warn you, non-geeks probably won’t be interested in this article.)
Textpattern has a “feature” whereby HTML tags are completely stripped out of visitor comments, presumably to close a common security hole. I’ve created a patch for Textpattern 4.0.4 that changes it such that HTML tags in comments are allowed, but the relevant characters (<, >, &, ', ") are converted to entity codes and therefore aren’t treated as HTML delimiters and therefore aren’t a security concern.
Net effect: it’s possible to post HTML code examples in article comments. I’ll post such a comment right after I post this article, just to prove the point.
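For the curious, the transformation amounts to something like this – sketched in perl for convenience (Textpattern itself is PHP, and this is not the patch code):

```perl
use strict;
use warnings;

# Encode the five delimiter characters so tags survive as visible
# text in a comment but can't act as live HTML.
sub encode_comment {
    my ($text) = @_;
    $text =~ s/&/&amp;/g;    # ampersand first, or later entities get mangled
    $text =~ s/</&lt;/g;
    $text =~ s/>/&gt;/g;
    $text =~ s/'/&#39;/g;
    $text =~ s/"/&quot;/g;
    return $text;
}
```

Note the ordering: encoding & first means already-typed entities like &lt; come out as the literal text “&lt;” rather than turning back into a live < character.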
I’ll be announcing the patch in the txp-dev mailing list and on the forums, and inviting Textpattern developers to use this article as a place to test such comments. Hopefully it will be merged into the next official Textpattern release, so I don’t have to maintain this patch for more than a few months.