William Fischer
William's posts

In building Apps for the Enterprise - Workflow is Everything

Something we discounted early on, but which has driven our product development ever since, is the importance of workflow in our enterprise apps.

In the HR space, we believe that workflow issues are driving recruitment companies to capture more of the value chain.  We initially thought it was economics, but it's workflow.  As Monster, Evenbase, CareerBuilder, and others launch more recruitment services, it is not declining margins or the slowdown in hiring that is pushing this, but enterprise customers looking for greater efficiency in the recruitment process.

Ultimately, this is why we think that instead of enterprises relying upon a hodgepodge of apps and services, we will see those services integrated into enterprise offerings.  Even a full-service ATS isn't enough; hence the recent acquisitions by Oracle and SAP.

Although we agree with Marc Andreessen on what should drive a tech company ("The core idea we have, the core theory we have, is that the fundamental output of a technology company is innovation..."), we try to focus on the issue of workflow.  Instead of adding features to one of our products, we might build another, complementary one if we believe that the workflow benefit of integrating the two services trumps the enhanced usability of a single service.

This is what drove our platform architecture decisions toward a service-oriented one.  A former Google employee articulated the benefits of SOA well here:

But as we see a more holistic approach to building products for the enterprise becoming the norm, we're more and more committed to SOA.
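To make the workflow argument concrete, here is a minimal sketch (my own illustration, not from the post; all class and role names are invented) of two separately built services composed into one workflow through small interfaces, rather than folded into a single monolithic product:

```python
# Hypothetical illustration of service-oriented composition: two standalone
# services, each behind a small interface, chained by a thin integration layer.
class CandidateSearch:          # standalone service #1 (name invented)
    def top_matches(self, role):
        return [f"cv-{i}@{role}" for i in range(3)]

class InterviewScheduler:       # standalone service #2 (name invented)
    def book(self, candidate):
        return f"interview booked for {candidate}"

def hiring_workflow(role):
    """The integration layer: chains the two services end to end."""
    search, scheduler = CandidateSearch(), InterviewScheduler()
    return [scheduler.book(c) for c in search.top_matches(role)]

print(hiring_workflow("analyst"))
```

The point of the sketch is that either service can be swapped or reused in another workflow without touching the other, which is the workflow benefit the post describes.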

Can Robots Replace Recruiters?

Some quick thoughts in response to a Tweet

Although companies continue to build CV matching/ranking algorithms, the general consensus amongst recruitment pros is that it's more art than science, and hence that machines will never replace people.
Per @chrisrussell: "I don't believe in 'job matching' sites; the recruitment process is just too human to codify."
This, like many other debates, keeps going because neither side ever really defines what "better" is.  How can one know if an algorithm is "better" than a sourcer without defining what success looks like?  In this case, I would consider the following:

Possibility:  I think everyone would agree that there exists at least one instance where some recruiter somewhere has recommended an employee who wasn't right for a company. This opens the door to the possibility that the process could be improved. 

It's also possible that when a recruiter puts forward one candidate from a pool of 1,000, they will not always pick the best one.

Fuzziness:  If 5 recruiters reviewed the same 1000 candidates for a job, it's possible that they would not all agree on the same top choice.  This suggests that the human algorithm has some limitations but, more importantly, if a computer algorithm puts forward a different candidate than the human, it doesn't mean that the computer is necessarily wrong.  The rightness of a decision might only be apparent in hindsight.

Limitations:  If the candidate pool is 10,000 vs. 1,000, the likelihood of a recruiter missing a great candidate increases, but the chances of a computer algorithm finding a great candidate increase as well.  Conversely, a smaller candidate pool of "imperfect" candidates would favor a human due to the weighting challenge.
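The pool-size effect above can be shown with a toy calculation (my own illustration, not from the post): if a recruiter only has time to review a fixed number of CVs, the chance that the single best candidate is even seen shrinks as the pool grows, while an algorithm can scan the whole pool.

```python
# Toy model: a recruiter reviews a uniform random sample of `reviewed` CVs
# out of `pool_size`. The probability that the top candidate is in the
# reviewed sample is simply reviewed / pool_size.
def chance_best_is_reviewed(pool_size: int, reviewed: int) -> float:
    """Probability the single best candidate is among the reviewed sample."""
    return min(reviewed, pool_size) / pool_size

for pool in (1_000, 10_000):
    print(pool, chance_best_is_reviewed(pool, reviewed=200))
```

At 200 reviewed CVs, growing the pool from 1,000 to 10,000 drops the recruiter's coverage of the top candidate from 20% to 2%, which is the asymmetry the paragraph describes.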

Dynamic Weighting:  A real challenge in building algorithms is the relative weight one assigns to the evaluated elements:  experience / intelligence / corporate fit / education / communication skills / team player / ambition / loyalty, etc.  Whereas a recruiter can be more dynamic in making these tradeoffs (some of this might be based upon their knowledge of the unique preferences of the hiring manager), machine learning can only help to tune the algorithms so far; there are real practical limits in this area, in part because of a delayed and imperfect feedback mechanism.
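The weighting problem above can be sketched as a weighted sum over the evaluated elements, with the weights shifting per hiring manager. This is a hypothetical illustration of the tradeoff, not any real scoring model; all element names and weights are invented.

```python
# Hypothetical candidate scoring: each element is rated 0-1 and the final
# score is a weighted sum. The weights themselves are the hard part.
DEFAULT_WEIGHTS = {
    "experience": 0.30, "intelligence": 0.20, "corporate_fit": 0.15,
    "education": 0.10, "communication": 0.15, "ambition": 0.10,
}

def score(candidate: dict, weights: dict = DEFAULT_WEIGHTS) -> float:
    """Weighted sum of 0-1 ratings; missing elements contribute zero."""
    return sum(w * candidate.get(k, 0.0) for k, w in weights.items())

cand = {"experience": 0.9, "corporate_fit": 0.4, "intelligence": 0.7,
        "education": 0.8, "communication": 0.6, "ambition": 0.5}

# A hiring manager who prizes fit over raw experience re-weights dynamically,
# which is exactly what a recruiter does implicitly and an algorithm must
# somehow learn:
manager_weights = {**DEFAULT_WEIGHTS, "experience": 0.15, "corporate_fit": 0.30}

print(round(score(cand), 3), round(score(cand, manager_weights), 3))
```

The same candidate ranks differently under the two weightings, which is why a fixed-weight algorithm and an adaptive human can disagree without either being "wrong."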

When Is Success Measured, and by Whom:  Typically, there is a hiring manager who will initially assess whether or not the recruiter has put forward the right candidates.  Since this is inherently idiosyncratic, and since they won't have knowledge of which candidates were not considered, an adaptive human is likely to best a computer.  But if a company looks at a longitudinal data set of candidates that were successful and ones that were not, then it's possible to start to isolate and define the variables which could improve machine algorithms.
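A hedged sketch of that longitudinal idea: given past hires with a known success outcome, estimate how predictive each element was and use that to re-weight a matching algorithm. The data, field names, and the crude difference-of-means heuristic below are all my own illustration, not anything from the post.

```python
# Toy longitudinal data: (candidate ratings, outcome) where outcome 1 means
# the hire worked out. Crude weights: mean rating among successes minus mean
# rating among failures, per element.
history = [
    ({"experience": 0.9, "corporate_fit": 0.2}, 0),  # hired, did not work out
    ({"experience": 0.6, "corporate_fit": 0.8}, 1),  # hired, succeeded
    ({"experience": 0.7, "corporate_fit": 0.9}, 1),  # hired, succeeded
    ({"experience": 0.8, "corporate_fit": 0.3}, 0),  # hired, did not work out
]

def learned_weights(records):
    """Difference of means between successful and failed hires, per element."""
    weights = {}
    for k in records[0][0]:
        good = [c[k] for c, y in records if y == 1]
        bad = [c[k] for c, y in records if y == 0]
        weights[k] = sum(good) / len(good) - sum(bad) / len(bad)
    return weights

print(learned_weights(history))
```

In this made-up sample, corporate fit turns out far more predictive than experience, which is the kind of variable isolation the paragraph says longitudinal data makes possible.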

Searchable candidate pools are growing quickly (job aggregators, wider adoption of online job boards, online CVs, enhanced ATS searching, social media...).  If success is defined as selecting the best available candidate from this growing pool and then tracking their success over time at a company, then my hunch is that companies will shift more of their recruitment process to matching algorithms.

New Big Data Products for Social CRM

We're introducing a new suite of Big Data products for the CRM market: Business Intelligence, Prospecting Automation, Predictive Automation, and Lead Generation. Here's a quick overview:

Reviewing some R&D in using an additional Hadoop MapReduce layer in our sales automation algorithms.
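The post doesn't show the actual Hadoop job, so here is a minimal in-process imitation of the MapReduce shape (map, shuffle/sort, reduce) as one might use it to aggregate lead-scoring signals in a sales automation pipeline. Event names and the scoring scheme are hypothetical.

```python
# Simulated MapReduce over lead-scoring events: the mapper emits (key, value)
# pairs, the shuffle groups values by key (as the framework would), and the
# reducer aggregates a per-lead score.
from collections import defaultdict

events = [("lead_42", 3), ("lead_7", 1), ("lead_42", 5), ("lead_7", 2)]

def mapper(event):
    lead_id, signal = event
    yield lead_id, signal          # emit (key, value) pairs

def reducer(lead_id, signals):
    return lead_id, sum(signals)   # aggregate per-lead score

grouped = defaultdict(list)        # shuffle/sort phase
for event in events:
    for key, value in mapper(event):
        grouped[key].append(value)

scores = dict(reducer(k, v) for k, v in grouped.items())
print(scores)
```

On a real cluster the same mapper/reducer pair would run distributed across the data, which is the point of adding such a layer to an existing algorithm stack.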