Trek Glowacki
Trek's posts

Post has attachment
Happy DecEMBER everyone!

Last year I gave away two coupons for the Ember.js PeepCode, and this year I'm at it again.

Code School has released a great beginner course for Ember.js and I'm giving away TWO 1-month memberships to Code School that include a special lifetime access to the Ember.js course.

To be eligible for one:
1) complete the first level (which is open to everybody)
2) tweet about your achievement using code school's share button
3) post a link to the tweet in the comments below along with a description of what you want to build next and why you think Ember.js will help you get there. 

I'll select the winners by applying a special scientifically proven method (i.e. whatever tickles my fancy) on December 31st.

Happy Holidays!

Post has attachment
I have TWO coupons for the latest PeepCode on Ember.js.  If you'd like a shot at one, let me know in the comments what you want to build next and why you think Ember.js will help you get there.

Someone asked me how open source projects were organized and how they could help:

It's funny you should ask this. I usually find myself asking the exact opposite question: how does a company maintain momentum with a primarily financial reward structure?  The problem with money, especially in the tech industry at this moment in history, is that it's just so easy to get. Hiring is basically impossible right now and I don't know anyone legitimately concerned about losing their job and not being able to find another one within a week or two.

Not that I think fear is a good motivator, but neither is money here.

Late 90s bubble notwithstanding, the growth in the tech sector has created a class of workers who did not previously exist: the engineering day laborer. Folks who know they can approach six figures with a college degree and two years of work. There's a lot of overlap between these people and the rest of us, but the big differentiator is: care and attention.

It's possible to limit your work to sane business hours and still care deeply about craft, architecture, code quality, and good process. It's also possible to give exactly 0 fucks, tick off your task list in the fastest way, and leave the mess to someone else. Often, the people defining expectations and rewards (managers, clients, etc) lack the expertise to adequately measure output in ways other than "is the task complete? does it work?"

I'm frankly baffled how any commercial project doesn't collapse under its own weight within two years. 

So, for me, it's obvious how large open source projects sustain momentum: it's a meritocracy of people who care. For them, this isn't work, so monetary compensation (except maybe to cover ongoing project costs like hosting) would be crass.

The reasons people care will vary. 

Most often I've found they care because they want to use the tool they're contributing to: it makes their job easier and more pleasant. It's important to them to leave behind clean, organized, sensible projects and they know they can't do that by writing all the code in a project on their own. So, they seek out externally supported projects. If those projects need help, it's easier and more future-proof to contribute back to the central project than to address it one-off in your own code.

People also care for social reasons. Successful open source projects always accumulate a community. I worked with the other Ember core people for a year before I met any of them in person, but long before that I considered them friends. Those friendships are every bit as meaningful and special to me as ones in other parts of my life. 

Even if I weren't using Ember in a project currently I'd still hang out in the campfire and commit code and ideas because I like, respect, and greatly admire the people involved.

As for how this community is organized, it varies over the life of the project:

1) In the beginning OSS projects start as the idea of a single person or small group of people who write a very basic example. Then they start shopping it around to colleagues. Projects at this stage fail for one of two reasons: the underlying premise of the project is busted in some way, or the concept is great but the initial maintainers lack enough social capital to get the attention of the right people.

At this stage, the initial developers might keep limping along for a while, but eventually the project will be eclipsed by other similar projects with a better premise or community. There are tons of these projects on GitHub: "Last commit 3 years ago."

2) An initial small group of people will see the underlying correctness of the project. Basically, they have the vision to see the project as it will be rather than as it is. That future, potential form provides something that doesn't exist and that they need. So, they begin contributing. These people are often more highly skilled than the typical developer and become trusted lieutenants, bringing a unique perspective or expertise in a focused area that drives the project forward.

At this stage a project can fail if this small group doesn't have a consensus on the project's direction and it dissolves from infighting.

3) The project begins to get the attention of the tech community at large, most of whom utterly despise it. This is where you start seeing blog posts like "Why X is a bad idea" or "How Y is better than X", "X can't scale", etc.  These folks are looking at an incomplete, in-progress project that doesn't offer advantages over their current tools; they can't yet see the future utility or fundamental differences that might lead to progress. They're also tied to existing tools and practices that took effort to master and the thought of yet-another-fucking-library-that-does-X is frustrating to them.

The flip side of this coin is that the people who do see the future potential can have a MASSIVE blind spot to the fact that the project isn't there yet.  They fight back like an over-reactive immune system to criticism they see as wrong-headed. Internally, calmer heads will always suggest "haters gonna hate" and that the contributors should focus on shipping a great project to prove them wrong rather than wasting energy responding.

I'm not sure either approach is right. At this point a project survives on attention. It sucks when someone maligns your hard work or calls you an idiot in public. But, by doing this in public, they're giving a project exposure. People who would otherwise not have heard of the project will investigate. Most will buy the misinformed opinions and shy away, especially if those opinions don't disrupt their world view ("I picked Y, and now X is out, but everyone says X is bad. That must be true, because I picked Y and I'm smart").

A smaller set of people will be intrigued despite the nasty opinions. These people are fucking gold. Fucking. Gold. They represent the programmer who will become the typical user of your project. At this stage the project is closer to its future form so more people can see its potential. 

Usually these folks are stronger developers than average, but lack expertise in the specific area your project addresses.

They're going to provide good non-expert feedback. At this stage the initial maintainers and trusted lieutenants are too close to the project. They understand the problem space too well (or think they do). What seems obvious and easy to them may baffle early adopters. The new developers will challenge these assumptions. 

At this stage the underlying architecture is solid but poorly organized or relies on too much secret internal knowledge. In other words, the public API sucks.  An injection of new developers forges the raw materials into a good final form.

Projects at this stage will fail for one of three reasons.

First, nobody responds to criticism. This sends the message that the critics are right and will drive away new blood.

Second, the reaction to negative feedback is itself overly negative. There's a tendency to react to asshats by one-upping their asshattery.  This gets your community a negative reputation ("Those X guys, they're all such cocky assholes") which will drive away new blood or, worse, attract assholes.

Third, the opinions of the new blood are dismissed ("You just don't understand yet...."). You might win these people over, but your project will not improve and become approachable by, and helpful to, a larger group of developers.

It's very difficult to identify projects that "fail" at this stage because the failure state looks like a form of success. There's a sizable group of regular users and decent forward momentum for them. The project probably hits a 1.0, maybe a book is written, but the maintainers are always baffled why other, similar projects eventually end up with such energy and attention ("Y? We've been doing what Y is doing for like three years now! Why is everyone suddenly interested in them?").

Eventually, competing projects surpass your own and even the initial maintainers quietly move on. The project keeps chugging along under the maintenance of people who are stuck with it ("Joe's Widget Factory invested heavily in X, and there's no way we can do a rewrite") but all the energy and vision has been drained away.

4) Growth. The project attracts more people who, despite its flaws, find it incredibly helpful in their work. You'll start to see about a 50/50 split in comments on the project: "X is awesome. I get it now", "X seems OK, but I'd rather use Y". Now that a community of early adopters is growing, there's money to be made by third parties. You'll see tutorials, screencasts, several books (probably by O'Reilly), often targeting specific demographics: "Enterprise X", "X for Designers", "Common-Task-That-Used-To-Be-Hard with X", "X for Beginners".   

As the community grows, new interesting use cases come to light. Parts of the API start to seem crazy in hindsight and major improvements are planned for the next version to make the architecture support these new uses. Young companies will begin to select the technology for their products, the number of jobs specifically requiring knowledge of the project starts to uptick.

There will be a conference ("XConf") that is fairly small (100-200 people), but future conferences will grow and potentially become regional. Blog posts with good feedback start sprouting ("A better way to organize something-something in X"). Some of these will be eye opening enough to drive major API changes for future versions.

An ecosystem of related libraries starts appearing. There are dedicated blogs, podcasts, Meetups.

External criticism becomes weak ("I just think X isn't for me") or ad-hominem ("Nobody uses X for serious projects, just these stupid startup 'craftsman' idiots. Y is battle-tested and the best solution out there").

A project can hum along nicely in this stage for years. Projects here will fail when they hit that-one-idea-so-crazy-it-just-might-work! that doesn't work. The maintainers are blind to it.

5) Shark Jumping. The project has veered off into a crazy direction. A sizable number of core contributors have eyebrow raising, head shaking moments about future direction.

One of two things will happen: the project will fork or core contributors will just stop contributing and quietly disperse.  A few bright minds will stick around to pursue the crazy ideas but most of the heavy hitters go.

6) Decline. The project has become weird, specialized, heavy, unwieldy, or some other adjective nobody likes. Other, similar projects have caught up and offer delightful new strategies or cherry pick the best strategies from your project. The world has moved on. The best and brightest have moved on. Your community is made up of people who have used the technology for years, are comfortable using it, and have better things to do with their life than chase down the hot-new-library-of-the-month. The API is "battle-tested" and "well-known." Picking your project is an "obvious" choice so new projects are started with it, but almost by default.

Projects fail at this stage when, like dying suns, their internal energy can't drive them anymore.

7) Death. Few people use the project for new products or companies. Many people still use it, but mostly to support legacy systems that are too big to replace. Bugs are fixed by spaghetti coding over them. 

The circle of life.

Post has attachment
A few days ago I tweeted about a testing workflow I enjoy for writing server applications that I wish had an equivalent in writing browser client applications.
In response people helpfully pointed me to unit testing libraries (Mocha and QUnit) or acceptance testing tools (CasperJS and Zombie.js).

These are good libraries and I've used them all at one time or another, but they're just pieces in the larger browser application testing puzzle, not full solutions in their own right.

140 characters and a link are obviously too tiny to describe my woes, so I'm fleshing out my win-list for a browser application testing toolset. Some of these items can be handled already, others can be handled by some tools but not all, and the entire list can be cobbled together today if you invest enough effort and don't mind the pieces being a tightly coupled hack instead of a designed process.

I'd consider it a testing win to cover all of these points in an integrated and extensible process:

1. Testing is not tied to a particular server stack.
When I'm writing client applications entirely in the browser the server is just a data API. I'll probably mock or record interactions just like I'd do with external APIs for server applications. 

The application does need a development server, so it's possible that I'm hosting the application's source on the same stack as the API to simplify my workflow, but:

  - You might not be using the same tech as me, 
    or I might use a different technology tomorrow, and this process
    should be portable so we don't need to reproduce this for
    every backend technology.
  - I might be running a pure development build server and 
    just proxying to API data in development. It's a nice division
    of labor for larger teams. 
  - I might be combining several APIs (this is my actual
    situation at work right now) so none of them are the
    obvious candidate for being the container for testing
    the UI.
  - It feels wrong. I don't need to use a Rails server
    to test iOS client applications. Just because I can
    run both my server and the browser on a development
    machine doesn't make it sensible.

2. Tests are written in the language you write code in. You'd get interesting looks if you said "Testing Python? There's a great Java library for that" and yet several people proposed solutions where acceptance tests were written in languages other than JavaScript.
   The tools themselves can be written in any language but this fact should never be exposed to the developer. Why?
   - It looks like much of the best browser testing tools are written in Ruby right now.
     I know Ruby but maybe you don't. This shouldn't lock you out of testing your browser application.
   - If you're working on a team not everyone will know the same languages – except JavaScript.
   - Even if everyone knows the same languages, context switching is frustrating.

3. I can execute the entire suite from the command line with one command; results should display there too.
Command line execution means I can fit testing into any workflow very flexibly. I choose to run directly from the command line; your IDE might call this command line tool to fit into its testing hooks; a continuous integration service has a central point of access. Reporting should be flexible enough to write to stdout or to a file in a nice data format (probably JSON and XML to support continuous integration tools).
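A sketch of the kind of reporting flexibility I mean: the same results rendered as plain text for stdout or as JSON for a CI tool. The `{ name, passed }` result shape is an assumption for illustration, not any real tool's format:

```javascript
// Format-flexible reporter sketch: one set of results, multiple outputs.
function report(results, format) {
  var failed = results.filter(function (r) { return !r.passed; }).length;
  if (format === 'json') {
    // Machine-readable summary for CI services.
    return JSON.stringify({ total: results.length, failed: failed, results: results });
  }
  // Default: one human-readable line per test, suitable for stdout.
  return results.map(function (r) {
    return (r.passed ? 'ok   ' : 'FAIL ') + r.name;
  }).join('\n');
}
```

The CLI entry point would `process.stdout.write(report(results))` by default and write `report(results, 'json')` to a file when a flag asks for it.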

4. I can run it headless.
For most tests I don't need to open a real browser. If I'm mid-development I might have one open and be clicking around – this is the nature of client development, it's inherently visual and nothing matches the bandwidth of actually running the application – but if I'm just popping in to fix a small bug or working on parts of the application that don't directly touch the view layer I should be able to write a test, make it pass, and push a solution without needing to ever open a browser.

5. I can run it through real environments

That said, I should also be able to execute the test suite through a real browser, ideally several, either on my development computer or through hosted services. More importantly, I can pause the execution of a test so I can explore the test case in a particular state and poke around my object structure and the DOM.

6. I can bring my own expectations, mocks, spies, etc
I.should().not().be().tied_to_a_particular enforced syntax of writing expectations.
I expect(options);
7. I can isolate specific tests from the command line interface.
I don't want to run my entire test suite while I'm developing a new feature or fixing a bug. I'll run the entire suite to verify I didn't cause regressions, but only once I've made my new tests pass. Having to run the entire suite interrupts my flow.

8. I can easily test async code.
And I don't just mean data loading. I mean everything. In a browser application, rendering can happen asynchronously in response to data changes or user interaction, so an event like "page load" is meaningless. Waiting for a particular selector to appear on the page is awful too. I don't want to tie a particular class/id structure to my tests.

This means the test runner likely needs hooks that event emitters or run loops can use.

It's possible that this exists and I'm just too stupid to find it. If so, please write a good explanation of how you set this process up, use it, and integrate it with other tools and processes.

Post has attachment
"In a genuine attempt to please their customers, software engineers focus on checking all the items, one by one, off a list of required features. This approach makes sense to technology-oriented software engineers, but it results in lumbering beasts. Customers are expert in knowing what they need to accomplish, but not in knowing how software ought to be designed to support their needs. Allowing customers to design software through feature requests is the worst form of design by committee."

- Stephen Few, channeling Alan Cooper

Use your opinion on marriage equality to figure out your alignment:

Lawful Good: The Bible is against it, but it's hardly like we apply everything in that book. Secular law is against it mostly from history, but we shouldn't retain laws that unnecessarily burden one group of people just because we disagree with how they live their lives, unless there's a very good reason. Protecting children or Marriage as an institution are good reasons, but we wrongly denied freedom to slaves and suffrage to women on similar grounds. If, after we look at it, the law is harming people, then the law must go.

Lawful Neutral: The Bible is against it, and we have a long legal history of denying it. It's a bummer for a small group, but these laws exist for a reason! There are all sorts of unintended consequences of meddling with long-held beliefs. Think of the children! What about Marriage and Society!

Lawful Evil: I see no reason to give strangers "special privileges" I don't enjoy. If gays want to get married perhaps they should work harder to put themselves in a position to do so.

Neutral Good: Other places have changed these laws and the proposed negative effects never occurred. We should change these laws and stop harming people. What good are laws that harm people? This isn't the purpose of banding together into a nation.

Neutral: Well, I'm not gay so I don't care much. I support the issue mostly because my gay friends want to get married and they seem like swell folks.

Neutral Evil: I couldn't care less about what gays do. I'll oppose gay marriage as long as it helps me and support it the moment it doesn't.

Chaotic Good: You see! This is the exact tyranny of big government we should all be warned against. Passing laws about who gets to marry whom? Or worse, put it to a popular vote? What freedom will they come hunting for next?

Chaotic Neutral: Let gays marry twelve people and a dog each for all I give a fuck.

Chaotic Evil: Yes, yes. Yell yourselves hoarse over this issue. Fools! This leave me free to... well, you'll see what I have planned in due time.

"Of course marriage equality is eventual.

For millions it is more than an abstract question or voting day decision. It is the constant background hum of their life.

The oppressed yearns every moment with all he is towards freedom. His heart beats out the silent tattoo of equality, the sigh of each breath whispers 'freedom', every laugh is a promise of a better tomorrow. There is no moment, no action, where he does not claw, and strive, and reach for justice.

Whatever your concern about marriage as an institution I can assure you: you simply do not care enough and you will fail."

Post has attachment

Post has attachment
Design <for> Hackers (or, if the title wasn't just nerdwashing <design for="hackers" />) is a nice survey of design topics with some contemporary web examples tossed in to make it feel relevant for the computery crowd. As an introductory book about design theory, it's decent (although I'd recommend getting The Elements of Graphic Design instead).

I was pretty geeked reading the introduction because the author's premise really aligns with some pain I've been feeling professionally: more and more people need design skills, people with a hacker mentality want a deep understanding of the tools they work with, and the author wanted to provide that targeted, in-depth focus to design topics.

Sadly, the author was just writing checks he couldn't cash.

Some topics get a lot of depth (especially fonts), others way less. The type of depth provided is highly variable. Fonts, for example, are explored in a historical context while colors and proportions get some light pop-psych treatment.

Most disappointingly only one topic (composition) gets any examples of how to apply design knowledge in a workflow while producing a product. I know a fraction more about how different colors affect cognition after reading than before, but still have no increased understanding of how to begin selecting colors or how to adjust a color palette when things look "off."

I was especially annoyed considering how malleable the web is! Download the damned sites, change their colors and layout and tell us how different design decisions could have been made and why they weren't.

Ultimately I think the author totally misunderstands why hacker-types seek deep understanding: it's not just for the thrill of knowing, it's also to be able to apply that knowledge in the making of things.

You'll probably learn as much by pairing a better book about design - which usually doesn't include web examples - with the design category on A List Apart.