🤖 Rankify by Aram AI Labs  ·  Powered by Vertex AI

High-Resolution
Talent Discovery

Traditional applicant tracking systems suffer from keyword blindness — they match surface text and, by industry estimates, discard up to 75% of qualified candidates. Rankify deploys parallel LLM agents on Vertex AI to extract implicit skills before embedding, delivering merit-based precision that keyword search cannot match.

Built for
Enterprise HR · ATS Vendors · Staffing Firms
Multi-Agentic Enrichment Pipeline
📄
Document Intake
REST API or direct upload · PDF, DOCX, image
🤖
LLM-Powered Enrichment
Parallel Vertex AI agents extract implicit skill dimensions
🧠
Augmented Pre-Vectorization
Patent-pending enriched embedding architecture
🎯
Semantic Matching
Job description ↔ enriched resume vectors
📊
Precision Ranking
AI-powered semantic reranking with fit scores
🧠 Powered by Vertex AI
🤖 Multi-Agentic Architecture
🔒 Patent Pending
⚡ Sub-Second Reranking
☁️ Google Cloud Platform
🇪🇺 SOC 2 & GDPR

The $8.5T Global Talent Gap Is Caused by Keyword Blindness

Legacy ATS platforms operate on surface-level keyword matching — creating a massive data accuracy problem that costs the global economy trillions.

75%
Industry estimate · qualified applicants rejected
Keyword Filters Create Systematic Data Loss
A Python developer who wrote "server-side scripting" never passes the filter. Keyword matching rewards resume optimization, not genuine technical capability — creating a data accuracy problem that costs enterprises real revenue.
23 hrs
Average recruiter time per open role
Manual Review Does Not Scale
At enterprise volume — hundreds of applicants per position — screening quality degrades linearly. Human fatigue introduces inconsistency, and high-value candidates are deprioritized arbitrarily. This is an infrastructure problem.
60%
Of job-relevant skills — never explicitly written
Implicit Value Is Invisible to Existing Tools
Most competencies are implied by project context, outcomes, and role scope. Legacy systems cannot infer what a resume means — they only match what it says. This implicit value represents the largest untapped talent signal in enterprise hiring.

Multi-Agentic Enrichment Pipeline

We do not just parse resumes — we enrich them with LLM-powered agents on Vertex AI. Our patent-pending multi-agentic workflow dispatches specialized foundation model agents in parallel, each extracting a different implicit skill dimension: technical depth, leadership signals, domain expertise, and cross-functional capability.

The findings from all agents are aggregated into a single enriched candidate profile. That enriched profile — not the raw resume text — is embedded into semantic vector space and matched against job descriptions using AI-powered semantic reranking.

The result: merit-based talent discovery at production scale. Candidates surface based on genuine technical capability, not keyword optimization.

The technical moat: Any search system can embed raw text. Only Aram AI Labs enriches it first with parallel Vertex AI agents — and that changes everything about ranking precision.

❌ Traditional ATS
"Built Django REST API for real-time analytics platform"
Django REST API analytics
↓ Multi-Agentic Enrichment
✅ Aram AI — Enriched Profile
"Built Django REST API for real-time analytics platform"
Python · Backend Architecture · Database Design · API Security · REST Principles · Data Pipelines · Scalability · System Design
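The fan-out described above can be sketched in a few lines of Python. This is an illustrative outline, not Rankify's implementation: the `run_agent` stub, its canned answers, and the exact dimension names stand in for the real Vertex AI foundation-model calls.

```python
from concurrent.futures import ThreadPoolExecutor

# The four implicit skill dimensions named in the text.
DIMENSIONS = [
    "technical depth",
    "leadership signals",
    "domain expertise",
    "cross-functional capability",
]

def run_agent(dimension: str, resume_text: str) -> list[str]:
    """Stand-in for one Vertex AI agent.

    A production agent would prompt an LLM to infer skills implied by the
    resume; here we return canned answers so the sketch runs offline.
    """
    canned = {
        "technical depth": ["Python", "Backend Architecture", "REST Principles"],
        "leadership signals": ["Project Ownership"],
        "domain expertise": ["Real-Time Analytics"],
        "cross-functional capability": ["API Security"],
    }
    return canned[dimension]

def enrich(resume_text: str) -> dict[str, list[str]]:
    """Dispatch one agent per dimension in parallel, then aggregate."""
    with ThreadPoolExecutor(max_workers=len(DIMENSIONS)) as pool:
        results = pool.map(lambda d: (d, run_agent(d, resume_text)), DIMENSIONS)
    return dict(results)

profile = enrich("Built Django REST API for real-time analytics platform")
```

The aggregated `profile` — not the raw resume text — is what would then be embedded, which is the core idea behind augmented pre-vectorization.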

From Document Intake to Precision Ranking in Four Steps

The entire multi-agentic pipeline — intake, LLM enrichment, embedding, and semantic reranking — runs on Vertex AI in seconds.

1
📥
Document Intake
Resumes arrive via REST API (ATS integration) or direct upload. High-accuracy parsing handles PDF, DOCX, and image formats with structured data extraction.
2
🤖
LLM-Powered Enrichment
Parallel Vertex AI foundation model agents each extract a different implicit skill dimension — technical depth, leadership signals, domain expertise, and cross-functional capability. Results are aggregated into a single enriched profile.
3
🧠
Augmented Pre-Vectorization
The enriched profile — containing both explicit and agent-discovered skills — is embedded into semantic vector space using our patent-pending augmented pre-vectorization architecture on Vertex AI.
4
🎯
Semantic Reranking
The job description is matched against all enriched embeddings. AI-powered semantic reranking surfaces candidates by genuine technical capability — ranked by true merit — in under a second.
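Steps 3 and 4 reduce to a nearest-neighbour search over embedding vectors. The dependency-free sketch below illustrates the idea only: real embeddings come from Vertex AI and live in Firestore, so the toy three-dimensional vectors and the linear mapping of cosine similarity onto a 0–100 fit score are assumptions for demonstration.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rank(job_vec: list[float], candidates: dict[str, list[float]]):
    """Return (candidate_id, fit score 0-100) pairs, best first.

    Cosine similarity lies in [-1, 1]; mapping it via 50 * (c + 1)
    yields a 0-100 score (an illustrative choice, not Rankify's formula).
    """
    scored = [(cid, round(50 * (cosine(job_vec, vec) + 1)))
              for cid, vec in candidates.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)

# Toy 3-dimensional embeddings; production vectors have hundreds of dims.
job = [0.9, 0.1, 0.0]
pool = {
    "cand_a": [0.8, 0.2, 0.1],   # close to the job description
    "cand_b": [0.0, 0.9, 0.4],   # far from it
}
ranking = rank(job, pool)
```

In production this brute-force loop is replaced by an ANN index, which is what keeps query latency sub-second across millions of profiles.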

A Complete AI-Powered Talent Intelligence Platform

Rankify is not just a search tool — it is a full-stack platform with REST API, MCP server, conversational AI agent, and developer console for building talent intelligence workflows at any scale.

🎯
AI-Powered Resume Ranking
Upload resumes in PDF or DOCX format, match against any job description, and receive AI-ranked candidates with scores 0–100, individual strengths, improvement areas, and detailed fit explanations — all powered by our multi-agentic enrichment pipeline.
Multi-format parsing · Semantic scoring 0–100 · AI-generated explanations
🔌
REST API & OpenAPI 3.1
Full REST API with OpenAPI 3.1 specification — 14 endpoints covering resume management, job descriptions, matching, and usage tracking. Interactive Scalar documentation with one-click testing at rankify-api.aramailabs.com/api/docs.
14 API endpoints · OpenAPI 3.1 spec · Interactive Scalar docs
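A call against the REST API might look like the following. The endpoint path, payload fields, and API-key header are illustrative guesses — the actual contract is in the OpenAPI 3.1 spec at rankify-api.aramailabs.com/api/docs. The request is constructed but deliberately not sent, so the sketch runs offline.

```python
import json
import urllib.request

BASE_URL = "https://rankify-api.aramailabs.com/api"

# Hypothetical payload shape for a matching request.
payload = {
    "job_description": "Senior Python backend engineer, Django, REST APIs",
    "resume_ids": ["res_123", "res_456"],
}

req = urllib.request.Request(
    url=f"{BASE_URL}/match",                  # illustrative path, not confirmed
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "X-API-Key": "YOUR_API_KEY",          # issued via the Developer Console
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; a ranked candidate list
# with fit scores 0-100 would come back as JSON.
```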
🤖
MCP Server — 9 AI Tools
Model Context Protocol server with 9 tools for AI-native integration. Works with VS Code Copilot, Claude Desktop, Cursor, and Windsurf — any AI assistant can manage resumes, rank candidates, and search your talent pool directly.
9 MCP tools · Streamable HTTP transport · Multi-client support
💬
HR Agent — Conversational AI
Gemini 2.0 Flash–powered conversational AI with function-calling. Ask natural language questions like “find my best candidates for the engineering role” and the agent calls the right API tools automatically.
Gemini function-calling · Natural language queries · MCP tool integration
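Function-calling means the model emits a structured tool call which the agent runtime dispatches to a registered function. The sketch below mimics that dispatch loop with a hand-written tool call in place of a live Gemini response; the tool name and signature are hypothetical, not Rankify's actual MCP tools.

```python
def find_best_candidates(role: str, limit: int = 5) -> list[str]:
    """Stub tool: a real implementation would query the ranking API."""
    return [f"candidate_{i} for {role}" for i in range(1, limit + 1)]

# Registry of tools the HR Agent may invoke (illustrative).
TOOLS = {"find_best_candidates": find_best_candidates}

def dispatch(tool_call: dict):
    """Route a model-emitted tool call to the registered function."""
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call["args"])

# What the model might emit for the natural-language query
# "find my best candidates for the engineering role":
call = {"name": "find_best_candidates",
        "args": {"role": "engineering", "limit": 3}}
answer = dispatch(call)
```

The agent then feeds `answer` back to the model, which summarizes it as a natural-language reply to the recruiter.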
🔑
Developer Console
Self-service developer portal with API key management, real-time usage dashboard, quota tracking, and plan management. Everything an engineering team needs to integrate Rankify into their hiring workflows.
API key management · Usage dashboard · Quota & plan tracking
📄
PDF Export & Reports
Export AI-ranked candidate results as downloadable PDF reports with match scores, strengths, improvement areas, and skill breakdowns — ready for hiring manager review and compliance archiving.
Ranked results PDF · Score breakdowns · Compliance-ready

Production-Grade Serverless AI on Vertex AI

Our multi-agentic pipeline runs entirely on Google Cloud Platform — chosen for the tight integration between Vertex AI foundation models, serverless compute, and vector search that our architecture requires. Every component is production-grade.

🧠

Vertex AI — Foundation Models

The core of our multi-agentic enrichment layer. Parallel foundation model agents on Vertex AI extract implicit skill dimensions — technical depth, leadership signals, domain expertise — at production scale. This is the intelligence layer that makes precision ranking possible.

⚙️

Cloud Run — Serverless Compute

Containerized, auto-scaling pipeline on Cloud Run that handles everything from a single document to 1,000+ concurrent enrichment tasks. Zero infrastructure management — production-grade serverless that scales from zero to N instances with enterprise demand.

🗄️

Firestore — Real-Time Data Layer

NoSQL database with native vector search for enriched candidate profiles. Embeddings are stored directly in Firestore — no external vector database required — delivering strong consistency, low-latency reads, and sub-second semantic queries at scale.

🔍

Vector Search — Semantic Reranking

ANN-indexed semantic search across millions of enriched embeddings with sub-second query latency. Combined with AI-powered semantic reranking, this delivers precision matching that keyword search cannot approach.

🏗️ Production-Grade GCP Architecture

Every component was selected for production reliability — from Vertex AI foundation models for multi-agentic enrichment, to Cloud Run's auto-scaling serverless containers, to Firestore's native vector search for sub-second semantic reranking.

☁️ Google Cloud 🚀 Auto-Scaling ⚡ Sub-Second 🌍 Global

Production AI Architecture on Google Cloud

A deep look at the infrastructure powering Rankify — from the multi-agentic pipeline on Vertex AI to the dual-protocol API layer on Cloud Run.

Client Layer
🌐
React 19 SPA
Firebase Hosting & CDN
TypeScript · Chart.js · pdf-lib
🔌
REST API Clients
OpenAPI 3.1 spec · 14 endpoints
Interactive Scalar documentation
🤖
MCP AI Clients
VS Code Copilot · Claude Desktop
Cursor · Windsurf · Any MCP client
API & Compute Layer — Google Cloud Run
FastAPI + FastMCP Server
Dual-protocol: REST API + MCP server in a single Docker container
Python 3.11 · Serverless · 0→N auto-scaling
🔒
Authentication Layer
Firebase Auth (JWT tokens) + API Key auth
Google Identity Platform · Role-based access
AI & Intelligence Layer — Vertex AI
🧠
Gemini 2.0 Flash
LLM-powered skill enrichment · Semantic reranking
HR Agent function-calling · Embedding generation
🎯
Multi-Agent Pipeline
Parallel skill extraction agents
Enrichment → Embedding → Matching → Reranking
Data & Storage Layer
🗃️
Cloud Firestore
NoSQL + Native Vector Search
No external vector DB required
📦
Cloud Storage (GCS)
Resume & JD file storage
AES-256 encryption at rest
🌐
Cloudflare CDN
Edge caching · DDoS protection
Custom domain · SSL termination
Dual-Protocol API Design
A single Cloud Run container serves both a RESTful API (14 endpoints, OpenAPI 3.1) and an MCP server (9 tools, streamable-http transport) from the same FastAPI codebase. Traditional HTTP clients and AI assistants share the same backend — no separate services to deploy or maintain.
Serverless Auto-Scaling
Deployed as a Docker container on Google Cloud Run with automatic scaling from 0 to N instances based on traffic. Sub-3-second cold starts, pay-per-use pricing, and zero infrastructure management. Production-grade reliability with no servers to patch.
Firestore Native Vector Search
Enriched resume embeddings are stored directly in Firestore using its native vector search capability — no external vector database required. This eliminates data synchronization complexity and reduces infrastructure cost while delivering sub-second semantic queries at scale.
Single-Model AI Pipeline
Vertex AI Gemini 2.0 Flash powers all four stages of the pipeline: (1) LLM-powered skill extraction, (2) structured data enrichment, (3) embedding generation, and (4) semantic reranking. A single foundation model family ensures consistency, reduces latency, and simplifies operations.

Enterprise Data Protection

Resume data is sensitive. We treat it that way — with enterprise-grade security controls and compliance commitments built into the platform from day one.

🛡️
SOC 2 Aligned
Infrastructure and operational controls aligned to SOC 2 Type II requirements, covering security, availability, and confidentiality of candidate data.
🔒
End-to-End Encryption
All resume data is encrypted in transit (TLS 1.3) and at rest (AES-256). Candidate data is never stored in plaintext.
🇪🇺
GDPR Compliant
Data subject rights, consent management, data residency controls, and deletion workflows are built in — suitable for EU and UK enterprise deployments.
📋
Audit Logging
Comprehensive, tamper-evident audit logs for every data access and enrichment operation — supporting compliance reviews, SLAs, and internal governance.

Rankify Is Live — Try It Now

Sign in with your Google account and start ranking candidates in minutes. No setup required, no credit card needed. See what multi-agentic AI on Vertex AI can do for your talent pipeline.