Over the past decade, hiring teams in many regions have faced rapid changes in how skills are measured, validated, and compared. Universities and companies have expanded their intake, remote recruitment has grown, and digital tests have become common. These changes have created large applicant pools while also raising questions about accuracy and fairness. Several studies published between 2020 and 2023 reported a growing mismatch between résumé-based screening and job performance, along with an increase in test fraud. As organizations adopted online assessments, the need for more reliable ways to verify real capability became a central theme in the human resources sector.
Around the same time, many talent leaders noted that remote hiring posed new challenges related to identity verification, content leaks, and tool-assisted cheating. A 2022 McKinsey survey reported that the majority of companies struggled to develop reliable skill-validation processes, which often led to inconsistent evaluations. As the global technology sector has expanded, screening quality has become increasingly tied to business outcomes. Some firms began to explore artificial intelligence to help maintain structure and consistency in their evaluations. Others sought platforms capable of conducting large-scale assessments without compromising test integrity.
It was within this environment that WeCP, short for We Create Problems, entered the market. The company was founded in Bengaluru in 2016 by Abhishek Kaushik and Mohit Goyal. Their early motivation came from observing how campus hiring cycles routinely produced questionable outcomes. According to their findings, many students relied on widely circulated online problem sets, and hiring teams struggled to distinguish between memorized answers and genuine capability. Kaushik and Goyal recognized that universities and enterprises needed assessments that evaluated how candidates think rather than what they had memorized.
The company’s early years focused on developing technical assessments for software roles. During this phase, WeCP offered customized coding questions tailored to specific job profiles. Recruiters could identify whether a candidate could apply concepts to practical problems, especially in engineering contexts. Over time, the platform expanded its coverage and built a library that now includes more than 500,000 questions and thousands of assessment templates. These additions helped the platform reach a broader range of industries seeking to evaluate engineering, IT, QA, security, data, aptitude, and communication skills without relying on repetitive content.
The company's development approach gradually shifted as hiring practices continued to change. By 2019, many enterprises sought systems capable of handling scale while maintaining monitoring standards and reducing turnaround time. WeCP expanded its proctoring capabilities to support secure online testing environments, including real-time monitoring, activity tracking, and integrity logs. These features were designed to help organizations manage high-volume assessments while addressing concerns related to impersonation, content misuse, and unsupervised testing. Around this period, the platform also added analytics tools that allowed companies to compare performance across locations or cohorts.
Artificial intelligence became a central part of the platform's architecture as the team explored new ways to support hiring operations. One example is WeCP AI, which can generate tailored assessments for specific job roles in a short amount of time. Rather than relying on manual authoring, recruiters can develop role-based tests that align with current industry requirements. This capability was introduced in response to companies seeking to reduce preparation time while preserving content relevance and quality. WeCP AI also generates multiple variations of each question to reduce the risk of leaks and keep assessments unpredictable.
Another development is the WeCP AI Interviewer, designed to support structured interviews at scale. It conducts role-aligned conversations, analyzes responses using speech and content models, and prepares summaries for later review by hiring teams. The tool targets companies that manage large interview pipelines and want to maintain a consistent evaluation format. According to internal data published by the company, enterprises conducting thousands of interviews each month reported reductions in screening time when automated first-round interviews were introduced.
As remote hiring continued to expand after 2020, concerns about AI-assisted cheating, deepfake use, and proxy participation became more visible across the recruitment industry, particularly in remote interviews. In response to these emerging risks, the founders of WeCP introduced an independent interview security tool, Sherlock AI, in mid-2025. Sherlock AI is not part of the WeCP platform and is not offered as a WeCP product or service. It operates independently and focuses solely on interview integrity rather than on assessment monitoring. The system analyzes video, voice, behavioral patterns, and device activity to identify impersonation attempts, voice spoofing, deepfake usage, and AI-assisted manipulation during live interview sessions. Organizations use it as a standalone solution when additional verification and interview monitoring are required.
Alongside technical and monitoring features, WeCP expanded its coverage to include communication and behavioral evaluations. English Pro evaluates reading, writing, speaking, and listening skills, with scores aligned with global standards such as the CEFR. Culture Pro analyzes cognitive and behavioral traits using interactive tasks. These modules were developed for companies that want a single platform to map both technical and non-technical competencies. According to publicly shared client case studies, these tools have been used for large-scale campus drives, internal mobility programs, and distributed team hiring.
Several enterprises, including Infosys, LTIMindtree, UST, Brillio, and Quinnox, have used the platform in different capacities. Public case studies report shorter hiring cycles, improved alignment between job requirements and candidate skills, and reduced interviewer workload. Independent interviews with talent heads conducted in 2022 and 2023 noted that hiring at scale often requires systems that integrate assessment, reporting, analytics, and monitoring into a single workflow.
Security and compliance have also become more prominent as global hiring standards evolve. The platform employs recording systems that comply with privacy guidelines and provides audit-ready logs for enterprises that process large recruitment volumes. Since its founding in 2016, WeCP has published research, interviews, and opinion articles in industry outlets, contributing to public discussions on hiring fraud, AI-based assessment models, and large-scale screening practices.
Today, WeCP continues to operate as a talent evaluation platform built around AI-supported assessments, interviews, and monitoring tools. The company founded by Abhishek Kaushik and Mohit Goyal has moved from a technical assessment product to a system used by organizations conducting hiring across multiple roles and regions. Its trajectory reflects broader changes in how enterprises approach skill validation in an increasingly digital and remote hiring environment.
Disclaimer: This article is intended for informational purposes only. Company names referenced are based on publicly available case studies, candidate reports, and industry discussions. The scope, duration, or current status of platform usage may vary. Mention of any organization does not imply endorsement, formal partnership, or ongoing commercial engagement unless explicitly stated by the respective parties.



