I used to hire a lot of designers while at Alfa-bank. After I left the bank, I conducted a study in which 16 hiring design managers labeled 243 resumes. The research required its own UI to emulate a manager's inbox, plus special feed optimization to distribute responses in the right way. You can read more about the study and its results in the first part.
The Russian design community responded positively to the research and the whole idea, and people asked whether we would continue.
Given the support and positive feedback, I thought of another experiment: let designers rate each other's resumes and see whether they would be interested in such a project.
We updated the initial prototype so that previous participants could access the UI that was initially meant for managers.
I sent emails to the designers who participated in the original research. By opting in, they agreed to share their initial resume with the other participants.
We also reopened the registration and got more resumes and ratings.
Hiring managers are usually busy, and some of them suggested it would be better if they could see resumes that had already been pre-filtered by other users.
As good as this idea might sound, I had my doubts: many designers had never hired anyone before, and that kind of rating didn't feel trustworthy to me. But, as a researcher, I had to treat it as a hypothesis and run an experiment.
The original UI that I designed for hiring managers seemed to be more than enough to qualify as an MVP.
I waited until a hundred designers had rated at least 45 resumes each (500+ new votes from candidates). It happened in less than a week.
In fact, by the time I started my data analysis, there were almost 2,800 votes.
First, I compared how designers rate each other against how managers did it:
Second, I interviewed 15 designers to get more insights and ideas on how to continue with the project, and whether I should continue at all.
Designers wanted personal feedback and recommendations, not only stats.
They also mentioned that after rating 10 to 15 resumes, they felt they knew what the next one would contain and how weak the average resume looks, and, most importantly, they now understood how to adjust their own resumes and make them better.
In other words, they learned to spot negative patterns and what to avoid, and they wanted to rewrite their resumes.
That was an unexpected finding for me personally, but I was happy to discover it.
The team was enthusiastic to test a few more hypotheses.
Every weekend from early February to late March 2020, we adjusted the prototype and tested new ideas.
Most of the updates required no UI changes, so there weren't many new layouts. But there were a lot of task descriptions.
It was enough to come up with a list of hypotheses and features for the next month.
Please feel free to register and test it!
Statuses were necessary: managers shouldn't waste their time on inactive resumes.
Statuses downgrade automatically unless a candidate confirms them by following a link from a notification email.
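A minimal sketch of how such an automatic downgrade could work. The status names and the two-week renewal window here are my assumptions for illustration, not the product's actual values:

```python
from datetime import datetime, timedelta

# Hypothetical status ladder and renewal window; both are assumptions.
STATUSES = ["active", "passive", "archived"]
RENEWAL_WINDOW = timedelta(days=14)

def downgrade_stale(resumes, now=None):
    """Move a resume one status down if its owner hasn't confirmed it in time."""
    now = now or datetime.utcnow()
    for resume in resumes:
        if now - resume["confirmed_at"] > RENEWAL_WINDOW:
            step = STATUSES.index(resume["status"])
            if step < len(STATUSES) - 1:
                resume["status"] = STATUSES[step + 1]
                resume["confirmed_at"] = now  # restart the clock for the next step
    return resumes
```

A scheduled job could run this daily, while clicking the link in the notification email would simply reset `confirmed_at`.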
We didn't have time to create a profile section, and this approach saved us a ton of time and kept the database clean from obsolete resumes.
Obsolete resumes are great for training, though.
Designers wanted something more than just stats. They said they wanted to get more feedback and learn from other people.
I treated it as a hypothesis worth testing, so we added an input field to the card. Feedback was optional, but people started posting a lot of comments.
This was an interesting UI challenge, and I'm happy that I came up with this simple solution without extra buttons and conditions.
The community might not be enough, and some people might want to learn from their friends or family. To make that possible, we introduced public link sharing.
Now, if a person wanted, they could share their resume via a link and gather stats not only from the community (registered users) but also from anyone who came across their text.
This feature required more changes:
Candidates and managers have different goals, so we had to create a separate feed for managers.
First of all, managers can look at the unfiltered feed (two of them never trusted candidates' ratings, even though they had asked for that feature in the first place, hmmmm. Learn from what people do, not what they say!).
Then there is a filtered feed, where every candidate has a rating above 60% (after at least 15 votes from other candidates).
If a manager upvotes a resume from the filtered list, they can see the candidate's email address, and the resume moves to a third section: an archive of approved resumes. If they downvote a resume, it disappears from their feed.
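The filtering rule above can be sketched in a few lines; the field names here are hypothetical, not the actual schema:

```python
MIN_VOTES = 15      # votes required before a resume can enter the filtered feed
MIN_RATING = 0.60   # share of upvotes a resume must exceed

def filtered_feed(resumes, downvoted_ids):
    """Resumes rated above 60% after 15+ peer votes, minus ones this manager rejected."""
    feed = []
    for r in resumes:
        votes = r["up"] + r["down"]
        if votes < MIN_VOTES:
            continue                      # not enough data yet
        if r["up"] / votes <= MIN_RATING:
            continue                      # below the quality bar
        if r["id"] in downvoted_ids:
            continue                      # this manager already downvoted it
        feed.append(r)
    return feed
```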
At some point, I decided to translate the UI into English and test the idea on a broader audience.
I was trying to keep the UI as minimal as possible, so I didn't want users to choose a language manually. Most of the time, this can be automated, and that is what I wanted.
By parsing the HTTP request (its Accept-Language header), you can infer the preferred language; if it is Russian, users are redirected to a dedicated version of the landing page, and their UI is in Russian.
For all other languages, English is the only option to start.
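One way to do that routing, assuming the language hint comes from the browser's `Accept-Language` header:

```python
def pick_ui_language(accept_language: str) -> str:
    """Return 'ru' for browsers that prefer Russian, 'en' for everyone else."""
    if not accept_language:
        return "en"
    # A header looks like "ru-RU,ru;q=0.9,en;q=0.8"; the first tag is preferred.
    primary = accept_language.split(",")[0].split(";")[0].strip().lower()
    return "ru" if primary.startswith("ru") else "en"
```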
But the real language detection happens when a user submits their resume.
I came across Google's language detection library and used it to store each resume's language.
For example, if a user's browser reports Russian as the primary language but their resume is written in English, we display the English UI and show only English resumes to rate, because the user is obviously interested in the English version of their resume.
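I assume the library in question is `langdetect`, a Python port of Google's language-detection project; with it, the per-resume override could look like this (falling back to the browser language if detection fails):

```python
def resume_language(text: str, browser_lang: str) -> str:
    """Prefer the language the resume is actually written in over the browser hint."""
    try:
        # pip install langdetect -- a port of Google's language-detection library
        from langdetect import detect
        lang = detect(text)        # returns ISO codes such as 'en' or 'ru'
    except Exception:
        lang = browser_lang        # empty text or missing library: trust the browser
    return "ru" if lang == "ru" else "en"
```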
Users expected to see their ratings instantly, as soon as they applied, and I totally understand that urge to see how one's resume might perform.
To address that curiosity, I experimented with machine learning and built a basic model that predicts, with 67% accuracy, whether a resume looks like an interesting one or not.
I had only Russian resumes at hand, and that’s why the model works on Russian texts only at this time.
I labeled resumes with positive and negative scores accordingly and trained a k-nearest-neighbors classifier on word features (essentially a basic spam filter). I tried it on new resumes that weren't labeled by experts but were rated by other users, and it mostly worked fine.
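A toy version of that pipeline with scikit-learn; the texts and labels below are invented placeholders, not the actual dataset:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Invented examples: 1 = "interesting" resume, 0 = generic filler.
texts = [
    "led redesign of a banking app and grew activation by 20 percent",
    "shipped a design system and mentored three junior designers",
    "responsible for various tasks, team player, fast learner",
    "hardworking and stress resistant, looking for new opportunities",
]
labels = [1, 1, 0, 0]

# Bag-of-words features feeding a k-NN classifier, like a basic spam filter.
model = make_pipeline(CountVectorizer(), KNeighborsClassifier(n_neighbors=3))
model.fit(texts, labels)

prediction = model.predict(["built a design system and led a redesign"])[0]
```

In a real setting, the vectorizer would be fit on stemmed, language-specific tokens, and the labels would come from expert and community votes.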
The idea was to program the flow so that it retrains itself on new data and new user ratings and, eventually, becomes instantly available to new users. Or, more likely, signals hiring managers the moment a good resume appears.
This experiment currently exists as a prototype and as an API that extracts basic features and returns the language when new users register. It is not yet deployed as intended.
It is quite fascinating how many of these ideas can be automated with open-source libraries that are already readily available. For me personally, it was fun to build prototypes in a Jupyter notebook, and I am looking forward to implementing this set of features in the existing product one day. As of July 1, 2020, I am struggling to gather enough resumes in English to see whether it is going to work. When you rely on texts and real data, stemming and lemmatizing words is a crucial part of the process: you have to build a new method for each new language, train different models, and so on.
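NLTK's Snowball stemmers, for example, ship a separate model per language, which is why every new language means a new preprocessing step. A small sketch with hypothetical tokens:

```python
from nltk.stem.snowball import SnowballStemmer

# One stemmer per language: the Russian pipeline cannot be reused for English texts.
stemmers = {"en": SnowballStemmer("english"), "ru": SnowballStemmer("russian")}

def stem_tokens(tokens, lang):
    """Collapse inflected forms so 'designed' and 'designing' count as one feature."""
    return [stemmers[lang].stem(token) for token in tokens]
```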