2013-01-22

Programmers and Practice vs Training

Lately I've been trying to learn one new thing every day.   And one of the sad discoveries that I've made is that I'm capable of forgetting things almost as quickly as I learn them.

So, two months after I learned how to write vim macros - I've already forgotten the specific keys used to define and run them.   Now, I can re-learn this very quickly - I've got good notes, the memories are just slightly hidden, and I haven't forgotten any concepts, just simple keys.   But this will slow down my use enough that I probably won't pull this tool out when I need it.
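(For the record - since I'll apparently forget again - the keys are standard vim: q followed by a register letter starts recording, q by itself stops, @ followed by the register replays the macro, and @@ repeats the last one played.)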

This got me thinking about how I needed to complement training with some repetition, some practice.   Just learning something isn't good enough.   This is exactly what martial artists do - they would call these katas.   It's also what musicians do - they would call these scales.


2012-10-10

Rediscovering the Benefits of Simple Design

Recently, I met a coworker from almost twenty years ago whose clearest memory of our time together was our discussions about design.   And how I got us all to make a field trip to the break room to take a look at the microwave oven there.

It had a dial and no buttons.  Pull the handle and it shut off the element automatically.   I loved the simplicity of this.   I loved how it made no demands of the user, and anybody could immediately put it to use.   There was no training, no documentation, no "insufficiently skilled users".

At the time we were rolling Microstrategy out to hundreds of users.   Microstrategy is a ROLAP (Relational On-Line Analytical Processing) tool: once you provided data in a relational database within a star schema, and described that schema to Microstrategy in the form of metadata, any user could use it easily - they could quickly create new reports by dragging and dropping element names, and it would generate the SQL for them.   It was a very powerful tool that in the right hands could achieve amazing results.   Prior to our roll-out of this tool the backlog on reports for our organization was ten months.   After we rolled it out I signed on to, and delivered, an 8-hour average SLA for the creation of new reports.
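To give a flavor of what that metadata-driven SQL generation means, here's a toy sketch in python - not how Microstrategy actually works internally, just the general star-schema idea, with invented table, column, and element names:

    # Toy illustration of metadata-driven SQL generation over a star schema.
    # Table, column, and element names are invented for illustration only.
    METADATA = {
        'Region':  {'table': 'dim_region', 'column': 'region_name',  'join_key': 'region_id'},
        'Month':   {'table': 'dim_date',   'column': 'month_name',   'join_key': 'date_id'},
        'Revenue': {'table': 'fact_sales', 'column': 'SUM(revenue)', 'join_key': None},
    }
    FACT_TABLE = 'fact_sales'

    def generate_sql(elements):
        """Turn a list of dragged-and-dropped element names into a query."""
        select_cols, joins, group_cols = [], [], []
        for name in elements:
            meta = METADATA[name]
            select_cols.append(meta['column'])
            if meta['table'] != FACT_TABLE:
                joins.append('JOIN %s USING (%s)' % (meta['table'], meta['join_key']))
                group_cols.append(meta['column'])
        sql = 'SELECT %s\nFROM %s\n%s' % (', '.join(select_cols), FACT_TABLE, '\n'.join(joins))
        if group_cols:
            sql += '\nGROUP BY %s' % ', '.join(group_cols)
        return sql

    print(generate_sql(['Region', 'Month', 'Revenue']))

The point is that once the star schema and the metadata exist, creating a new report stops being a programming task.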


2012-09-20

In Praise of the Embarrassingly Simple

Recently, while I was having lunch with some former members of my project, the conversation drifted to some of the old code that's still around.   These guys are incredibly good programmers, and many of their contributions are still running today - four to six years after they left the project.

One of the items that we discussed was our "batch broker" - a process responsible for handing out unique batch ids - ids that uniquely identify processes, end up in logs and audit tables, and are sometimes tagged to rows in the database.

We laughed about how embarrassingly simple this process was: just a few dozen lines of python code that
  • open up and lock a file
  • increment the number within
  • close & lock the file
  • log the requester & new batch_id
  • return the batch_id
Our myriad batch programs (transforms, loads, publishes, etc) then simply call a bash or python function on their local system which calls this program remotely over ssh to get a new batch_id.   Total amount of code is maybe 50 lines across all libraries.
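A minimal sketch of that broker, just to show the shape of it (file locations, log format, and names here are illustrative, not the actual code), would look something like this in python:

    #!/usr/bin/env python
    # batch_broker.py - illustrative sketch only; paths and names are invented.
    import fcntl
    import sys
    import time

    COUNTER_FILE = '/var/local/batch_broker/batch_id'
    LOG_FILE = '/var/local/batch_broker/requests.log'

    def next_batch_id(requester):
        # open up and lock the counter file so concurrent requests serialize
        with open(COUNTER_FILE, 'r+') as f:
            fcntl.flock(f, fcntl.LOCK_EX)
            # increment the number within and write it back
            batch_id = int(f.read().strip() or '0') + 1
            f.seek(0)
            f.truncate()
            f.write(str(batch_id))
            # closing the file releases the lock
        # log the requester & new batch_id
        with open(LOG_FILE, 'a') as log:
            log.write('%s %s %d\n' % (time.strftime('%Y-%m-%d %H:%M:%S'),
                                      requester, batch_id))
        return batch_id

    if __name__ == '__main__':
        requester = sys.argv[1] if len(sys.argv) > 1 else 'unknown'
        print(next_batch_id(requester))

A caller on another host then needs nothing more exotic than ssh broker-host batch_broker.py my_load_job to get its id.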

2012-09-10

Learning 1 Thing Every Day

When I was 18 and a programmer in the USMC I decided the best way for me to become skilled was to learn one thing every day about programming in addition to my daily duties.   I recruited a colleague and each of us committed to learning and sharing our discoveries.

By learning I don't mean just reading about some feature or method, but instead studying it to the degree necessary to be able to easily apply it later.   Fitting this extra work into our schedules meant that most of these discoveries were fairly small.    But they accumulated and built upon one another very quickly.    Perhaps more importantly this strategy positively affected our daily outlook by helping us frame our day within an optimistic, learning context.

Decades later I'm a mid-career technologist who tends to neglect my technical skills while focusing on organization, communication, and resource issues necessary to get projects successfully deployed.  So, I've decided to resurrect this strategy to resharpen my skills and inject some more fun into my day.   I'm going to use this blog to help me track these items and summarize the impacts.

2011-03-18

Data Warehouse ETL for Data Scientists

At Strata I attended a discussion panel in which a number of speakers described the various types of work involved in data science:  data scrubbing, data analysis, presentation, etc.  The general consensus was that data scrubbing was the most time-consuming task in data science.   I've also found this to be true on data warehousing and data mining projects, so no surprise here.

The most interesting part of the discussion was when an audience member asked if the panel could recommend any tools to help with the data scrubbing.  The answer was "no".

I spoke with the panel members afterwards and found that they were completely unfamiliar with data warehousing and, of course, ETL.   So, it appears, is the author of "Data Analysis with Open Source Tools".   So is just about everyone I've met working in this field.

Of course this is mostly because data warehousing came from the database community while the new interest in data analysis has come from the programmer community.   There's certainly no problem with having a different community re-explore this space and possibly find new and better solutions.   The problem is that the more likely scenario is a vast number of projects that fail because of the performance, data quality, or maintenance costs that come from solving this problem poorly.

2011-02-14

Analysis and the 'So What' Question

While at Strata I had an opportunity to participate in quite a few sessions that demonstrated how to take raw data and analyze it with various tools.  The output was usually a set of graphs, charts, etc, though sometimes just simple tables.   All of this was useful to get a sense of how the tools work, but what was missing was the final step in the analysis - a powerful insight or understanding that one could use to make an intelligent change to a process.   Generally, the presentation technique was fine, the tools were great, but the demonstrated impact of the tools was trivial.

One reason for this is that some of the presenters may have to hold back their most significant discoveries until the right time - and this just wasn't that time, or this wasn't the right audience.   I can understand this - since most of my best analysis can't really be shown without getting NDAs and other agreements in place first.   Another reason is that the presenters might have wanted to focus on the tool and not on the data or business being studied, which was just serving as a necessary example to work on.   But this is misguided, since delivering insights is the bottom line - not delivering pretty pictures.   The last reason I can imagine is that delivering powerful insights is hard, and while these presenters are working on it they may not yet have a suitable example.   And I think that this is the most likely answer.

My concern is that people spend a lot of time building gorgeous but empty-headed analytical solutions that just don't have much to say.    This is pretty similar to the chart junk problem that Edward Tufte complains about.   To make this a little more clear I've included a few examples below.

2011-02-10

Breadth of Data vs Depth of Analysis

One of the things that I felt was missing from O'Reilly's Strata Conference was a nuanced sense of the trade-offs between complex analysis and vast volumes of data.  There is a trade-off, and I've seen it play out consistently.  It works like this: where do you spend your investment?
  • deep analysis - with unpredictable costs and benefits
  • broad sets of data - with predictable (high) costs and benefits

2011-01-28

Buy, Reuse or Build ETL Software?

While I was talking with someone today, he mentioned a concern about my team's "homegrown" software: that it would nickel & dime us to death compared to "more robust commercial software". I respected this guy - he was very bright and had a lot of successes under his belt. But I also felt that he was echoing a common corporate perception, and that he was quite wrong.

I've run into this notion so often that I now plan for it: in the minds of many commercial software has more credibility than open source software, which in turn has more credibility than custom-built software. And since these perceptions are often held by those that control my budget - perceptions matter.

2010-12-26

Mashups vs Data Warehouses

Mashups have come from the application side of IT, warehouses from the data side. They overlap quite a bit - but there's not a lot of thought of how to leverage the best of both worlds.

2010-12-23

Parallel Database and Hadoop Costs

In all the hype around Hadoop, and maybe the "micro-hype" around parallel databases, it's pretty easy to find exciting anecdotes to support these architectures: numbers of nodes in a cluster, speed to calculate or move data, etc. Finding the costs is much more difficult - and without the costs how does someone make a decision on the merits of the solution?