Machine Learning Operations (MLOps) is crucial in bridging the gap between machine learning (ML) and IT operations. Its focus is streamlining the deployment, monitoring, and management of machine learning models in production environments. As organizations increasingly integrate machine learning into their workflows, the role of MLOps professionals becomes essential, and evaluating MLOps candidates is a prerequisite to hiring the best talent possible.
In this DevMatch assessment, the objective is to evaluate candidates' skills in improving the performance of a Python web service that generates textual descriptions for uploaded images using the CLIP model from Hugging Face's Transformers library. As with any other DevMatch assessment, the candidate will start from an existing codebase and have one hour to complete the requested tasks.
The core challenge presented in this assessment revolves around an existing REST API that takes an image and returns a description of the image in JSON format. The challenge is that many of the images submitted are identical, leading to wasted computing time. The task is to implement a caching mechanism that stores image processing results in an in-memory cache. This cache will serve as a quick retrieval system, allowing the API to return pre-computed responses for previously processed images. Additionally, the candidate must create a new endpoint, `GET /cache-stats`, which provides insights into the cache's usage statistics.
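To make the task concrete, here is a minimal sketch of the kind of caching layer a candidate might add. The only requirement taken from the assessment is the `GET /cache-stats` endpoint; the framework (Flask is used here for illustration), the `/describe` endpoint path, and the `describe_image` helper are all hypothetical stand-ins for the repository's actual code.

```python
import hashlib

from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical in-memory cache: maps a hash of the image bytes to the
# previously computed JSON payload, so identical uploads are served instantly.
cache = {}
stats = {"hits": 0, "misses": 0}


def describe_image(image_bytes: bytes) -> str:
    """Stand-in for the repository's CLIP-based description logic."""
    return "a placeholder description"


@app.route("/describe", methods=["POST"])  # the real endpoint path may differ
def describe():
    image_bytes = request.files["image"].read()
    key = hashlib.sha256(image_bytes).hexdigest()  # identical images share a key

    if key in cache:
        stats["hits"] += 1
        return jsonify(cache[key])

    stats["misses"] += 1
    payload = {"description": describe_image(image_bytes)}
    cache[key] = payload
    return jsonify(payload)


@app.route("/cache-stats", methods=["GET"])
def cache_stats():
    # Usage statistics for the cache, as required by the assessment.
    return jsonify({"entries": len(cache), **stats})
```

Keying the cache on a content hash rather than a filename is one straightforward way to ensure that identical images, even when uploaded under different names, reuse the pre-computed response.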
Candidates are provided with a Python web application hosted in a Git repository. The application leverages the CLIP model to generate textual descriptions for images by comparing features. The provided starting code is fully functional and utilizes the Hugging Face transformers library for loading the pre-trained CLIP model. Image classification is then performed by comparing uploaded image features to text features corresponding to different image classes in the CIFAR100 dataset.
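For readers unfamiliar with this pattern, the snippet below sketches zero-shot image classification with CLIP using Hugging Face's Transformers library. The checkpoint name, the input file, and the short label list are illustrative; the actual repository compares against the full set of CIFAR100 class names.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Illustrative checkpoint; the repository may pin a different CLIP variant.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Stand-ins for the CIFAR100 class names used by the real service.
labels = ["apple", "bicycle", "castle", "dolphin"]
prompts = [f"a photo of a {label}" for label in labels]

image = Image.open("example.jpg")  # any local image file
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image scores the image features against each text prompt.
probs = outputs.logits_per_image.softmax(dim=1)
print("Predicted class:", labels[probs.argmax().item()])
```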
In the DevMatch arena, candidates have a VS Code environment available to solve the assessment, or they can use their own machine. They start by cloning the repository (which already demonstrates familiarity with source control) and then follow the instructions in the README, which includes exact examples of how to run and test the API. Here we see how a candidate runs the service and uses `curl` to call the API, getting a JSON response back.
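As an illustration only, the same API could also be exercised from a short Python script; the port and endpoint paths below are assumptions, and the README in the repository has the authoritative examples.

```python
import requests

# Hypothetical host, port, and paths; check the repository's README for the real ones.
BASE_URL = "http://localhost:8000"

# Upload an image and print the JSON description returned by the service.
with open("example.jpg", "rb") as f:
    response = requests.post(f"{BASE_URL}/describe", files={"image": f})
print(response.json())

# After the caching task is implemented, the new endpoint can be queried the same way.
print(requests.get(f"{BASE_URL}/cache-stats").json())
```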
This assessment evaluates candidates on the following key skills:

- Improving the performance of a Python web service through caching
- Designing and implementing new REST API endpoints, such as `GET /cache-stats`
- Working with pre-trained models through Hugging Face's Transformers library
- Reading, understanding, and extending an existing codebase under time constraints
In traditional algorithmic interviews, candidates are often assessed on their ability to solve abstract problems. Real-life scenarios such as the one presented in this assessment offer several advantages: they mirror the day-to-day work of deploying and optimizing ML services, they test a candidate's ability to navigate and extend an existing codebase, and they produce working code in a repository that hiring teams can review and discuss.
This assessment can be used in a final round of interviews to discuss the candidate's design choices and dig deep into their thought process. Hiring managers will be able to see the candidate's code and have access to the repository where they worked.
In conclusion, incorporating real-life scenarios in technical assessments provides a more comprehensive and insightful evaluation of candidates' capabilities, offering a glimpse into their potential contributions to the team and organization. As the industry evolves, embracing such practical assessments becomes increasingly vital in identifying talent that can thrive in dynamic and challenging environments.
Need to see it to believe it? As with any public assessment, you can open this challenge right now: https://app.devmatch.io/arena/problem/103.