Today was a rather short day at work. Glassdoor recently named Data Science the top job for work-life balance, and I meant to prove that true today by leaving early. My wife was traveling for work, so I needed to do her job as kid chauffeur and take my daughter to soccer. Up until last year I took her to soccer every week – I was the head coach of her team. Last February, however, I lost that job (my most coveted position at the time). I started at LinkedIn and my days became slightly longer due to an added commute. More importantly, my daughter outgrew my abilities as a coach and wanted to join a travel team with a professional coach. Hopefully my four years as a soccer coach provided some benefit to the girls. As a kid, I learned valuable life lessons from Mr. Noonan (my Little League coach) and the other volunteer coaches. Mr. Noonan taught me that life isn’t fair – his kid got to pitch because he was the coach’s son, and I got to attempt to catch his wild pitches.
While at work, I did quite a bit of work onboarding our new team members (a new permanent team member, and two visitors from the State Department’s TechWomen program). Finding just the right balance of intervention as a manager is quite challenging. On the one hand, there is an overwhelming amount of stuff that needs to be learned when you start as a software developer at a new company. Without guidance, a new person can get stuck and frustrated. On the other hand, learning how to figure this stuff out on your own is one of the most valuable skills you can build in your first weeks on a new job. There is documentation for most things, but the location is often not obvious and the quality can be mixed. Most importantly, the key bit of knowledge to acquire is not how to do something, but who to ask when it doesn’t work.
I am also fairly new at LinkedIn, and today I took a course on how to build a new software library or service at LinkedIn using the newest set of tools. One thing that you get at a larger company that you don’t get at a startup is a bunch of developer tools to make your life easier. At my previous startup, every developer had to bootstrap their own development environment, and I ended up doing a bunch of work writing Chef scripts to automate the provisioning of servers and deployment of packages and software. I am not very good at that stuff, and it showed. At LinkedIn, we have a world class tools team that does this stuff for us. Some of the tools are custom to LinkedIn, so there is a learning curve, but the increase in productivity we get by having dedicated experts working on development tools is large.
The highlight of my day was definitely when I looked at the results of our newest A/B test for the Jobs You May Be Interested In relevance model. At LinkedIn, we do nothing unless it is part of an A/B test. In our most common setup, we divide our members into test and baseline samples, and an automated framework compares site-wide metrics for the two samples. We are literally running hundreds of A/B tests at any given time, but since the assigned cohorts for each test are independent of one another, the results generally don’t interfere. The beauty of the LinkedIn testing framework is that it calculates all site-wide metrics for all active tests and tells you if something is amiss in an unrelated product. For instance, we could tweak the relevance model for jobs by adding a bunch of complicated algorithms. Conceivably, these changes would raise the clickthrough rate for jobs (good), but because the algorithm is more complicated, it could have negative consequences for other services that need to wait for our calculations (e.g. the second-pass ranker that builds the home page feed). Our system is not foolproof, but it is vastly superior to what I have worked with in the past. Negative externalities are often not spotted until much later, and they are hard to track down.
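To make the cohort independence concrete, here is a minimal sketch in Python of how salted, hash-based bucketing can keep assignments for concurrent tests from lining up with one another. This is just an illustration, not LinkedIn’s actual framework; the test names and member IDs are invented.

```python
# Minimal sketch of salted hash bucketing for concurrent A/B tests.
# Not LinkedIn's actual framework; test names and member IDs are invented.
import hashlib

def assign_cohort(member_id: int, test_name: str, treatment_fraction: float = 0.5) -> str:
    """Deterministically assign a member to 'treatment' or 'baseline' for one test.

    Salting the hash with the test name makes the assignment for one test
    effectively independent of the assignment for every other test, so
    hundreds of tests can run at once without their cohorts overlapping
    in any systematic way.
    """
    digest = hashlib.md5(f"{test_name}:{member_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000                      # map member into 10,000 buckets
    return "treatment" if bucket < treatment_fraction * 10_000 else "baseline"

# The same member can land in different cohorts for different tests.
print(assign_cohort(42, "jymbii-latent-preference-model"))
print(assign_cohort(42, "feed-second-pass-ranker"))
```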
Anyways, we had a quarter-long project to add a new “latent preference model” to the jobs recommendations, and today we found that the model seems to be giving a positive lift for our members. A positive lift for our members, in our case, translates to people finding better jobs that they apply to and often get. All of these complicated build systems and statistical testing frameworks help our members have fatter paychecks and more rewarding careers. That’s why I get a rush when I see positive A/B test results.
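For the curious, here is a toy illustration (with entirely invented numbers) of what a relative lift on a clickthrough-style metric looks like, paired with a simple two-proportion z-test for significance. Our real framework computes far more than this, but the basic idea is the same.

```python
# Toy lift calculation with a two-proportion z-test; all numbers are invented.
from math import sqrt
from statistics import NormalDist

def lift_and_pvalue(clicks_base, views_base, clicks_test, views_test):
    p_base = clicks_base / views_base
    p_test = clicks_test / views_test
    lift = (p_test - p_base) / p_base                          # relative lift
    p_pool = (clicks_base + clicks_test) / (views_base + views_test)
    se = sqrt(p_pool * (1 - p_pool) * (1 / views_base + 1 / views_test))
    z = (p_test - p_base) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))               # two-sided test
    return lift, p_value

# Invented numbers: the treatment cohort shows roughly a 5% relative lift.
lift, p = lift_and_pvalue(clicks_base=9_800, views_base=1_000_000,
                          clicks_test=10_300, views_test=1_000_000)
print(f"lift = {lift:.1%}, p = {p:.4f}")
```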
Questions? Share your thoughts!