How Linearloop Built a Zero-Loss ERP for a Gold Refinery: Gold VGR ERP Case Study
Mayur Patel
Mar 3, 2026
6 min read
Last updated Mar 5, 2026
Table of Contents
Introduction
About Gold VGR ERP
The Core Business Objective
Tension and Project Limitations
Translating Business Expectations into System Outcomes
Technical Risk Areas
Technical Stack
Custom Architecture Designing for Traceability and Scale
Execution Approach
Results from Manual Ledger to Intelligent Systems
Stakeholder Feedback and Ongoing Partnership with Phase 2 Roadmap
Conclusion
Introduction
When every milligram carries financial weight, manual tracking becomes a liability. VGR Gold needed more than a basic ERP. They needed a precision system that could eliminate manual gold tracking, digitise the entire refinery lifecycle, and ensure stage-wise traceability across melting, assaying, refining, casting, and recovery. The objective was to protect high-value assets through system architecture.
Linearloop built a refinery-grade ERP with a precision recovery engine at its core, designed to track residual particles, enforce controlled workflows, and provide real-time visibility across every stage. This case study breaks down how we engineered that system under tight constraints, extended platform capabilities, and delivered a zero-loss operational framework. Read on to see how we helped VGR Gold move from manual processes to milligram-level control.
About Gold VGR ERP
VGR Gold is a Dubai-based gold refinery operating in a high-precision, multi-stage refining environment. Their workflow spans raw material intake, melting, assaying, refining, casting, and final recovery, each stage directly impacting weight, purity, and financial value. The business handles high-value material daily, where even microscopic discrepancies can translate into measurable loss.
The operating environment is accuracy-critical and regulatory-sensitive. Every batch must be traceable, documented, and verifiable across stages. Process deviations are not just operational issues. Instead, they carry financial and compliance implications. VGR needed systems that matched the precision of their physical operations.
The Core Business Objective
Automation was not the goal. Precision was. VGR Gold needed a system that could account for every gram moving through the refinery, eliminate blind spots between stages, and replace manual reconciliation with system-enforced accuracy. The North Star was operational control at milligram-level fidelity.
End-to-end gold lifecycle visibility: Every batch had to be traceable from raw intake to final recovery, with stage-wise status updates and complete historical records.
Zero-loss tracking: The system needed to detect and calculate residual particles at each phase, ensuring no measurable gold went unaccounted for.
Real-time operational insight: Managers required live visibility into stage progress, weight changes, and recovery metrics without waiting for manual reports.
Elimination of reconciliation gaps: Manual cross-checks between departments had to be replaced with system-driven validation and automated calculations.
Controlled access and accountability: Role-based access controls were mandatory to ensure that each stakeholder could act only within their designated stage and responsibility.
Tension and Project Limitations
The system had to be delivered fast, perform flawlessly, and operate in a zero-error environment. Engineering decisions had to balance speed with precision, without compromising traceability or security.
5-week delivery timeline: The entire ERP, spanning multiple refinery stages, had to be architected, built, tested, and validated within five weeks.
No room for calculation error: Recovery calculations and stage-wise weight propagation required milligram-level accuracy. Even minor logic flaws could translate into financial loss.
Performance under load: The system had to maintain consistency and responsiveness while handling simultaneous stage updates, image uploads, and reporting operations.
Directus platform limitations: Several required workflows and calculation mechanisms were not natively supported, requiring custom extensions without breaking core stability.
Security expectations: Strict role-based access control was mandatory. Each stage required controlled permissions to prevent unauthorised edits or data manipulation.
Translating Business Expectations into System Outcomes
Success was defined by operational trust. VGR needed a system that removed manual dependency, enforced accuracy at every stage, and created controlled visibility across the refinery lifecycle.
Eliminate manual entries: Replace paper-based logs and spreadsheet reconciliations with structured, system-driven data capture across all refinery stages.
Accurate recovery calculations: Ensure residual gold tracking and weight computations were precise, traceable, and automatically validated at every phase.
Role-based access enforcement: Restrict user actions by stage and responsibility, preventing unauthorised edits and ensuring accountability.
Stage-wise monitoring: Provide real-time visibility into batch status, weight transitions, and process progression across the entire lifecycle.
Modular reporting flexibility: Enable dynamic, editable reports that reflect live operational data without compromising calculation integrity.
Technical Risk Areas
The complexity sat in the logic layer, where calculation errors, workflow gaps, or data inconsistencies could directly translate into financial risk. Every engineering decision had to protect traceability and precision.
Residual gold recovery calculations: The recovery engine had to compute expected vs actual weight changes at each stage and track microscopic residual particles without rounding errors or propagation drift.
Stage-dependent workflow logic: User actions and data visibility changed dynamically based on refinery stage, requiring tightly controlled transitions and validation checks.
Multi-image compliance mapping: Each batch required multiple images tied to specific stages, stored, indexed, and retrievable without degrading performance.
Data consistency across stages: Weight, purity, and batch data had to propagate accurately from intake to recovery, with no break in traceability.
Extending Directus beyond native capability: Several refinery-specific workflows were not supported out of the box, requiring custom logic layers without compromising platform stability.
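The stage-dependent workflow logic described above is often enforced with an explicit transition map, so a batch can only move forward through valid refinery states. The sketch below is illustrative, not VGR's production code; the stage names mirror the refinery flow in this case study, and everything else is an assumption:

```typescript
// Hypothetical sketch of stage-gated workflow transitions. Stage names come
// from the refinery flow described in the case study; the rest is illustrative.
type Stage = "intake" | "melting" | "assaying" | "refining" | "casting" | "recovery";

// Each stage may only advance to the next one in the physical flow.
const allowedTransitions: Record<Stage, Stage[]> = {
  intake: ["melting"],
  melting: ["assaying"],
  assaying: ["refining"],
  refining: ["casting"],
  casting: ["recovery"],
  recovery: [], // terminal stage: no further transitions
};

// A guard like this would run before any stage update is persisted,
// rejecting skipped or backward transitions.
function canTransition(from: Stage, to: Stage): boolean {
  return allowedTransitions[from].includes(to);
}
```

Centralising the transitions in one map keeps validation consistent everywhere the workflow is touched, whether from the operator UI or an admin override.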
Technical Stack
The stack was selected for control, extensibility, and speed of execution. We used Directus as the operational backbone and extended it where refinery-specific logic required deeper customisation. The architecture remained lightweight but precision-focused.
Backend & CMS (Directus): Core data management layer for refinery workflows, collections, and operational logic extensions.
Frontend (Vue.js): Stage-wise tracking interface aligned to refinery operations.
Database (Directus-managed database): Centralised storage for batch data, recovery calculations, and documentation mapping.
Authentication & Access (Directus RBAC + custom logic): Enforced stage-based permissions and controlled workflow transitions.
Version Control (GitHub): Managed source control, deployment workflows, and iterative development cycles.
Custom Architecture Designing for Traceability and Scale
The architecture was designed around one principle: trace every gram, at every stage, without breaking data integrity. Instead of layering features on top of a generic ERP, we structured the system around refinery flow, ensuring stage transitions, weight calculations, and reporting logic were deeply interconnected and scalable.
Stage-aware data propagation: Each refinery phase, from intake through melting, assaying, refining, casting, and recovery, was modelled as a controlled state transition. Data propagated with validation rules. Weight, purity, and batch metadata were locked, recalculated, and versioned at each stage to prevent silent discrepancies. This ensured traceability from raw material entry to final recovery without reconciliation gaps.
Custom gold weight calculation engine: Standard ERP arithmetic was insufficient. We implemented a precision-driven calculation layer that computed expected vs actual weight differences, tracked residual particles, and maintained milligram-level accuracy. The engine validated inputs dynamically and prevented stage progression if inconsistencies were detected.
Modular ERP structure: The system was built as independent but connected modules like order management, stage tracking, recovery, and reporting. This modularity allows future extensions (accounting, compliance modules) without disrupting the core refinery logic.
Recovery module architecture: The recovery layer was designed as a standalone logic engine integrated with stage data. It recalculates residual quantities in real time and updates operational dashboards automatically, eliminating manual reconciliation.
Real-time reporting engine: Reports are dynamically generated from live stage data. Editable interfaces exist, but core calculation logic remains protected to preserve accuracy and auditability.
Mobile-friendly refinery interface: The UI mirrors physical refinery workflows. Operators can update stages on the floor without navigating complex dashboards, aligning digital execution with real-world operations.
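The recovery engine itself is proprietary, but its core idea, computing expected versus actual weight in integer milligrams so repeated stage arithmetic cannot accumulate floating-point drift, can be sketched roughly as follows. All names and structures here are illustrative assumptions, not VGR's actual logic:

```typescript
// Illustrative sketch: weights held as integer milligrams to avoid
// floating-point drift across stage-to-stage arithmetic.
interface StageWeighing {
  stage: string;
  inputMg: number;   // weight entering the stage, in milligrams
  outputMg: number;  // weight leaving the stage, in milligrams
}

// Residual gold retained at a stage is input minus output.
function residualMg(w: StageWeighing): number {
  return w.inputMg - w.outputMg;
}

// Validate a full batch: every stage's input must equal the previous
// stage's output, and no stage may report more output than input.
function validateBatch(weighings: StageWeighing[]): { ok: boolean; totalResidualMg: number } {
  let totalResidualMg = 0;
  for (let i = 0; i < weighings.length; i++) {
    const w = weighings[i];
    if (!Number.isInteger(w.inputMg) || !Number.isInteger(w.outputMg)) {
      return { ok: false, totalResidualMg }; // non-integer milligrams rejected
    }
    if (w.outputMg > w.inputMg) {
      return { ok: false, totalResidualMg }; // gold cannot appear from nowhere
    }
    if (i > 0 && w.inputMg !== weighings[i - 1].outputMg) {
      return { ok: false, totalResidualMg }; // break in stage-to-stage traceability
    }
    totalResidualMg += residualMg(w);
  }
  return { ok: true, totalResidualMg };
}
```

A check like this, run before any stage progression is committed, is one way to make "no measurable gold goes unaccounted for" a system-enforced invariant rather than a manual reconciliation step.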
Execution Approach
The timeline was fixed but the scope was complex. Execution required speed without sacrificing calculation accuracy or architectural integrity. We structured delivery around tight feedback loops and disciplined iteration to maintain control under pressure.
Agile methodology: The project followed structured Agile cycles, breaking the refinery workflow into manageable build units aligned with stage transitions.
Stakeholder check-ins: Regular syncs ensured validation of recovery logic, workflow behaviour, and reporting accuracy before moving to the next iteration.
Iterative validation: Each module, especially recovery and stage tracking, was tested incrementally to detect discrepancies early.
Rapid build cycles: Development and validation ran in parallel, allowing continuous refinement within the five-week window.
Scope control within tight timeline: Priorities were clearly defined around precision and traceability, ensuring no deviation from core objectives.
Results from Manual Ledger to Intelligent Systems
The ERP replaced fragmented manual tracking with a centralised, stage-driven system. Every refinery phase now operates within a controlled workflow where data propagates with validation.
Manual logs and spreadsheet reconciliations were eliminated. Batch entries, weight transitions, and recovery calculations are system-enforced, reducing operational gaps and dependency on cross-verification.
Stage-level visibility is real time. Managers can track batch progression, weight evolution, and residual quantities instantly. The recovery module calculates residual gold at each phase and updates reports dynamically, ensuring milligram-level traceability.
The platform also creates a scalable base for Phase 2:
Accounting integration
Advanced compliance reporting
Recovery enhancements
Performance optimisation post-production
Live metrics will follow post-launch, but structurally, the shift is clear: Faster workflows, reduced waste, stronger retention accuracy, and tighter operational control.
Stakeholder Feedback and Ongoing Partnership with Phase 2 Roadmap
The leadership team validated the recovery logic, stage transitions, and reporting structure during simulation cycles. They highlighted three areas of impact:
Clarity of workflow navigation
Precision of residual calculations
Confidence in system-enforced traceability
Operational teams responded positively to the stage-aligned interface. The workflow mirrors physical refinery movement, reducing friction during adoption. Role-based controls were particularly appreciated, as they introduced structured accountability without slowing execution.
We remain actively involved post-deployment. The engagement has transitioned from build phase to optimisation and expansion. Phase 2 is already structured around controlled enhancement rather than feature expansion for its own sake.
Phase 2 roadmap includes:
Accounting module integration for financial alignment
Advanced compliance reporting capabilities
Extended recovery logic enhancements
Performance tuning based on production load
UI refinements driven by real usage feedback
Conclusion
Gold VGR ERP demonstrates what happens when operational risk is addressed through system architecture. By replacing manual reconciliation with stage-aware workflows and milligram-level recovery logic, VGR moved from ledger-based tracking to a controlled, intelligent refinery platform. Precision became structural. Traceability became enforceable. High-value material handling is now backed by engineered accountability.
If your operations depend on accuracy, compliance, and zero-loss tracking, the solution is not another tool; it is architecture-first thinking. Linearloop builds systems designed for precision environments where mistakes are expensive. If you are ready to modernise critical workflows with the same level of control, let’s talk.
Mayur Patel
Head of Delivery
Mayur Patel, Head of Delivery at Linearloop, drives seamless project execution with a strong focus on quality, collaboration, and client outcomes. With deep experience in delivery management and operational excellence, he ensures every engagement runs smoothly and creates lasting value for customers.
BVB Media builds customised e-commerce platforms for businesses across the Netherlands. As their client portfolio expanded, the traditional delivery model began to show its limits. Every new store required a separate codebase, repeated engineering effort, and longer delivery cycles. This approach slowed down onboarding for new clients and created unnecessary complexity for the development teams managing multiple implementations.
To solve this, the focus shifted from individual projects to platform architecture. Instead of building each store independently, we worked with BVB Media to design FTLShop, a reusable e-commerce framework that could support multiple client deployments from a common foundation. The objective was straightforward: Create a scalable architecture that reduces repeated development work while allowing each client implementation to remain flexible and customisable.
The idea was to move from isolated projects to a structured framework called FTLShop. This framework would act as a reusable base architecture, enabling faster deployments while maintaining consistent performance across stores. The focus was on building a system that could scale operationally as the client base expanded.
Reusable framework architecture: FTLShop was designed as a reusable framework that could power multiple e-commerce implementations. Instead of creating new systems for every client, teams could deploy stores using a shared architectural foundation while keeping core components stable and maintainable.
Faster deployment timelines: The framework reduced repeated engineering work by standardising core platform elements. This enabled development teams to launch new stores significantly faster while maintaining consistent technical standards across projects.
Support for multiple client implementations: The architecture was built to support simultaneous deployments across many stores. Each client could run on the same framework while maintaining their own configurations and business requirements.
Flexible customisation with architectural stability: Custom features were supported through modular extensions. This allowed client-specific functionality without modifying the framework core, ensuring long-term system stability.
Building a reusable e-commerce framework meant working within several technical and operational limits. The system had to integrate with existing infrastructure while supporting future scalability.
These constraints shaped the architectural decisions and required disciplined engineering choices throughout the project:
Legacy PHP backend integration: The existing backend and admin panel were already built in PHP. The new framework had to integrate with this system without disrupting existing workflows or compromising performance, which required careful API orchestration.
Pawjs dependency conflicts: Working with Pawjs introduced dependency management challenges. Certain third-party libraries created compatibility issues, requiring custom solutions to maintain framework stability.
Distributed team coordination: Backend and design teams operated from Indonesia, while architecture planning involved teams across regions. This required structured communication and clearly defined API contracts.
Framework discipline vs client customisation: Clients required unique features, but altering the core framework could create instability. The architecture needed extension layers that allowed flexibility without breaking the shared system.
For BVB Media, success was not measured by launching a single platform. The expectation was to create an architecture that could support continuous client onboarding without repeating the same development effort. The system needed to enable faster delivery timelines while maintaining a consistent technical foundation across every implementation.
The framework also had to support scale. BVB Media wanted a solution capable of powering 50–60 e-commerce stores while keeping performance stable across deployments. At the same time, development complexity could not grow with every new client. The platform had to support expansion while keeping engineering effort predictable and manageable.
Building a reusable framework on top of an existing ecosystem introduced several engineering challenges. The system had to support modern frontend architecture while remaining compatible with legacy backend infrastructure. At the same time, it needed to remain stable across multiple store deployments.
Pawjs dependency management: Working with Pawjs required careful handling of third-party library dependencies. Several packages created compatibility conflicts, which needed custom adjustments to maintain framework stability.
Frontend–backend integration: The frontend architecture had to work with an existing PHP backend. Bridging these systems required a structured communication layer.
Custom API middleware design: A Node.js middleware layer was built to manage API interactions, ensuring reliable communication between frontend and backend components.
Multi-store stability: The framework had to maintain consistent performance across dozens of live client implementations without introducing instability.
The framework required a technology stack that could support modular frontend development while integrating smoothly with the existing backend platform. Each technology was selected to support a specific architectural requirement within the FTLShop framework.
Pawjs: Served as the primary framework for building the frontend architecture. It enabled modular component development and supported scalable e-commerce interfaces across multiple client stores.
Node.js: Used to build the middleware layer that handled communication between the frontend system and the existing backend services.
TypeScript: Provided type safety and a maintainable code structure within the middleware layer, improving long-term code reliability.
Redux: Managed frontend state efficiently, ensuring predictable data flow across the application.
CSS: Used for building consistent and reusable styling components within the frontend system.
Google Analytics: Enabled behavioural tracking and performance insights across client e-commerce stores.
Google Tag Manager: Managed marketing and analytics tags without requiring code changes in the core system.
A/B Testing Tools: Allowed teams to run conversion experiments and optimise user journeys across stores.
Robin Chat: Integrated customer communication capabilities directly into the platform for live engagement.
Custom Architecture That Made Scaling Possible
The team designed a layered architecture that separated responsibilities across frontend, middleware, and backend systems. This approach allowed each layer to evolve independently while maintaining stable communication between components.
Layered system architecture
The platform followed a structured architecture consisting of four layers: Frontend interface, Node.js middleware, PHP backend services, and the underlying platform layer. Each layer had a clearly defined role, which prevented tight coupling between systems. This separation allowed teams to scale the platform without creating dependencies that could affect stability across multiple client implementations.
Modular frontend layer
The frontend architecture was designed to be modular and reusable. Components were structured so that interface elements, workflows, and state management patterns could be reused across different client stores. This ensured that new implementations could be deployed faster while maintaining a consistent user experience and maintainable code structure.
Node.js middleware for API orchestration
A custom Node.js middleware layer was developed to manage communication between the frontend and the existing PHP backend systems. The middleware acted as an API orchestration layer, handling request routing, response formatting, and integration logic. This abstraction prevented direct dependency between the frontend and backend systems, improving flexibility and maintainability.
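One of the middleware's concrete jobs, response formatting, can be sketched in TypeScript as follows. The field names and payload shapes here are hypothetical, chosen only to illustrate the idea of translating a legacy PHP-style payload into the shape the frontend expects:

```typescript
// Illustrative sketch of the middleware's response-formatting role.
// All field names are hypothetical, not FTLShop's actual contract.
interface LegacyProduct {
  product_id: string;
  product_name: string;
  price_cents: string; // legacy backends often serialise numbers as strings
}

interface Product {
  id: string;
  name: string;
  priceCents: number;
}

// Translate one legacy record into the frontend's camelCase, typed shape.
function normalizeProduct(raw: LegacyProduct): Product {
  return {
    id: raw.product_id,
    name: raw.product_name,
    priceCents: Number.parseInt(raw.price_cents, 10),
  };
}

// The middleware would apply this per request, keeping the frontend
// decoupled from legacy field naming and typing quirks.
function normalizeCatalog(raw: LegacyProduct[]): Product[] {
  return raw.map(normalizeProduct);
}
```

Concentrating this translation in the middleware means neither side has to know the other's conventions, which is precisely the decoupling described above.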
Decoupling frontend and backend systems
The middleware layer allowed both sides of the system to evolve independently. Frontend teams could implement new features without modifying backend services, while backend teams could maintain platform logic without impacting the frontend architecture. This decoupling was essential for maintaining stability across multiple client deployments.
Reusable BVB Shop package
The architecture also introduced a custom package called BVB Shop, which centralised business logic and API handling. Instead of duplicating logic across implementations, this package provided a reusable foundation for core e-commerce operations. It ensured consistency across stores and simplified maintenance as the number of client implementations grew.
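The "flexibility without modifying the framework core" pattern can be illustrated with a configuration-merge sketch: each store supplies overrides that extend shared defaults instead of forking the core. The config shape and defaults below are assumptions for illustration, not the actual BVB Shop package:

```typescript
// Illustrative sketch of framework-vs-client separation: a store extends
// shared defaults rather than modifying the framework core. The config
// shape and default values are hypothetical.
interface StoreConfig {
  storeName: string;
  locale: string;
  currency: string;
  features: { liveChat: boolean; abTesting: boolean };
}

const frameworkDefaults: Omit<StoreConfig, "storeName"> = {
  locale: "nl-NL",
  currency: "EUR",
  features: { liveChat: false, abTesting: false },
};

// Merge a client's overrides onto the shared defaults. The defaults object
// is never mutated, so every store starts from the same stable baseline.
function createStoreConfig(
  storeName: string,
  overrides: Partial<Omit<StoreConfig, "storeName">> = {},
): StoreConfig {
  return {
    storeName,
    ...frameworkDefaults,
    ...overrides,
    features: { ...frameworkDefaults.features, ...(overrides.features ?? {}) },
  };
}
```

Because client-specific behaviour lives entirely in the overrides, framework upgrades roll out to every store without per-client rework.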
The project followed an Agile development approach with structured sprint cycles. Work was organised into incremental deliverables so the architecture could evolve while maintaining system stability. Each sprint focused on building specific framework components, validating integrations, and resolving dependency issues before moving to the next layer of development.
Responsibilities were clearly divided between the teams. Linearloop focused on the frontend architecture and the Node.js middleware layer that connected the system. BVB Media’s in-house team handled the backend services, platform logic, and infrastructure management. This separation allowed both teams to work within their areas of expertise while maintaining a consistent architectural direction.
Since the teams were distributed across different regions, structured communication became essential. Clear API contracts were defined to ensure reliable interaction between system layers. Regular technical discussions and planning sessions helped align development priorities, resolve integration issues early, and maintain coordination across the frontend, middleware, and backend environments.
The platform integrated several external tools to support analytics visibility, marketing operations, and customer engagement across multiple client stores. These integrations ensured that every implementation built on the framework could track performance, run experiments, and interact with users without requiring additional custom development.
Integration Tool
Role in the platform
Google Analytics
Implemented to capture behavioural data across stores. It provided visibility into traffic patterns, user journeys, and engagement metrics, helping teams understand how visitors interact with different e-commerce implementations.
Google Tag Manager
Enabled centralised management of tracking scripts and marketing tags. This allowed teams to update analytics configurations without modifying the core platform code.
A/B Testing Tools
Integrated to support experimentation across storefronts. Teams could test layout changes, messaging variations, and conversion flows to improve user engagement and purchasing behaviour.
Robin Chat
Added live communication functionality within the storefront. This allowed businesses to interact directly with customers, answer queries, and improve customer engagement during browsing sessions.
Results and Measurable Impact
The framework shifted BVB Media’s delivery model from isolated project builds to a scalable platform capable of supporting multiple client implementations. The architecture enabled faster deployments, reduced repeated engineering work, and provided a stable foundation for expanding the client portfolio.
50–60 e-commerce stores deployed: The FTLShop framework now powers dozens of live client implementations across the Netherlands.
Faster deployment timelines: Reusable architecture reduced repeated development effort and accelerated store launches.
Consistent architecture across implementations: All client stores run on a shared framework, ensuring technical stability and maintainability.
Simplified onboarding for new clients: New e-commerce stores can be deployed using existing framework components rather than building from scratch.
Live implementations already running: Stores including Obatala, Ledkoning, Isolatieshop, Parfumoutlet, and Stoffeholland operate on the framework.
The framework delivered the operational stability BVB Media needed to scale its e-commerce services. Stakeholders recognised the value of moving from one-off builds to a reusable architecture that could support multiple client implementations. The platform improved delivery speed, reduced repeated engineering work, and created a consistent technical foundation for managing dozens of live stores.
The collaboration also evolved into an ongoing technical partnership. The framework continues to be maintained and supported as new client implementations are added. Both teams remain aligned on improving the platform architecture, ensuring that the system continues to support BVB Media’s growing e-commerce client base.
BVB Media’s shift from individual e-commerce builds to a reusable framework fundamentally changed how they deliver digital commerce solutions. By introducing the FTLShop architecture, the team created a scalable system that supports dozens of client stores while maintaining stability and development efficiency. The modular frontend, middleware-driven integration, and reusable business logic layer together formed a foundation that allows the platform to grow without increasing engineering complexity.
Projects like this demonstrate the value of architecture-first thinking in modern digital platforms. When systems are designed for reuse and scalability, businesses can expand faster without compromising technical quality. If your organisation is facing similar scaling challenges in platform development, the Linearloop team can help you design and build the right architecture for long-term growth.
Sarthitrans is a logistics technology platform built to organize India’s highly fragmented transportation ecosystem. It connects shippers, fleet owners, independent drivers, and transporter drivers within a single mobile-first system. The platform replaces informal phone-based coordination with structured digital workflows for load posting, proposal comparison, assignment, tracking, and settlement.
The product was rebuilt from the ground up to support operational automation, multilingual accessibility (Hindi and English), and financial transparency through milestone-based wallet payments. With a centralized admin panel and role-based access control across all user types, Sarthitrans functions as both a marketplace and an operational command center for logistics management.
The North Star was operational discipline at scale: Automate logistics workflows, enforce financial transparency, and eliminate manual coordination across the ecosystem.
Digitize the complete load lifecycle from posting to settlement.
Automate 50%-50% milestone-based wallet payments.
Reduce payment disputes through structured transaction logic.
Minimize manual intervention across all user roles.
Enable centralized admin visibility and control.
Build a scalable architecture ready for Phase 2 expansion.
There was no fixed delivery deadline, but the project operated within real-world deployment constraints that directly influenced sequencing and architecture decisions. App Store and Play Store approvals required compliance alignment and release stability. Razorpay onboarding involved KYC verification and payment authorization validation before milestone automation could go live.
Additional friction came from WhatsApp Business API approvals and TDS consent workflow compliance, which required careful legal and technical alignment. These constraints shaped the rollout plan, testing cycles, and production-readiness strategy for Phase 1.
The highest risk areas were financial logic, permission architecture, and backend orchestration. These components required precision, auditability, and production stability from day one.
Wallet lifecycle management: Designed a controlled transaction engine to handle wallet credits, milestone holds, releases, commission deductions, and final settlements without race conditions or duplicate states.
Milestone-triggered Razorpay automation: Implemented conditional payment capture and release logic tied strictly to load completion events, ensuring 50%-50% structured disbursement without manual overrides.
Granular RBAC across four user roles: Built dynamic permission matrices in Directus to enforce strict access separation between Customers, Transporters, Drivers, and Transporter Drivers.
Commission calculation engine (7% configurable): Developed a rule-based commission layer with adjustable logic, ensuring accurate deductions before payout settlement.
TDS consent and compliance workflow: Integrated consent capture and verification checkpoints to align transporter and driver payments with regulatory expectations.
Real-time WebSocket updates: Enabled live status changes across load lifecycle events to prevent state mismatches between mobile clients and admin systems.
Audit trail logging: Implemented immutable event tracking across financial and operational actions to ensure traceability and dispute resolution support.
Deep Directus customization: Extended Directus beyond default CMS usage into a full backend infrastructure layer, including relational modeling, API orchestration, and permission templating.
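The 50%-50% milestone split and the 7% commission deduction described above reduce to careful integer arithmetic. A minimal sketch, using integer paise (the smallest currency unit) so settlement totals stay exact, might look like this; the function names and rounding choices are assumptions, not Sarthitrans's actual implementation:

```typescript
// Illustrative sketch: all amounts in integer paise so repeated settlement
// arithmetic stays exact. Names and rounding policy are hypothetical.
interface Settlement {
  advancePaise: number;      // 50% released at assignment
  completionPaise: number;   // 50% released at verified completion
  commissionPaise: number;   // platform commission, deducted before payout
  driverPayoutPaise: number; // what the transporter/driver actually receives
}

function settle(loadValuePaise: number, commissionRate = 0.07): Settlement {
  const advancePaise = Math.floor(loadValuePaise / 2);
  // Remainder rather than a second division, so the halves always sum exactly.
  const completionPaise = loadValuePaise - advancePaise;
  const commissionPaise = Math.round(loadValuePaise * commissionRate);
  return {
    advancePaise,
    completionPaise,
    commissionPaise,
    driverPayoutPaise: loadValuePaise - commissionPaise,
  };
}
```

Keeping the commission rate a parameter mirrors the configurable deduction engine mentioned above: rates can change without touching the split logic.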
The architecture was designed to reduce duplication, enforce financial discipline, and centralize operational control while keeping Phase 1 stable and scalable.
Single Flutter app with dynamic role adaptation: Built one unified mobile application serving Customers, Transporters, Individual Drivers, and Transporter Drivers through conditional UI rendering and permission-based access control, eliminating the need for multiple codebases.
Directus as backend infrastructure: Extended Directus into a full backend engine handling relational data modeling, RBAC enforcement, API exposure, workflow automation, and admin control within a single structured system.
Wallet-based milestone automation engine: Engineered a controlled financial workflow where 50% advance and 50% post-completion payments were programmatically triggered based on verified lifecycle events, reducing disputes and manual settlement.
Load lifecycle modeling framework: Designed structured state transitions from load posting to proposal comparison, assignment, execution, and closure, ensuring consistent status propagation across mobile and admin interfaces.
50 km intelligent load-driver matching: Implemented proximity-based filtering logic using Google Maps SDK to surface relevant load opportunities within a defined geographic radius, improving assignment efficiency.
Commission configuration layer: Developed a configurable deduction engine allowing dynamic commission adjustments without altering core payment logic.
Centralized admin command center: Built a unified dashboard integrating proposal comparison, KPI monitoring, role management, audit logs, and revenue visibility to provide full operational oversight.
Multi-channel notification orchestration: Coordinated push notifications, SMS alerts, and event triggers to maintain state synchronization across all stakeholders.
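The lifecycle modeling and milestone wallet automation described above can be sketched as a small state machine in which payouts fire only on verified transitions. The state names, amounts, and class shape below are illustrative assumptions, not the production schema.

```python
from enum import Enum

class LoadState(Enum):
    POSTED = "posted"
    ASSIGNED = "assigned"       # 50% advance releases here
    IN_TRANSIT = "in_transit"
    COMPLETED = "completed"     # remaining 50% releases here

# Allowed transitions; anything else is rejected outright
TRANSITIONS = {
    LoadState.POSTED: {LoadState.ASSIGNED},
    LoadState.ASSIGNED: {LoadState.IN_TRANSIT},
    LoadState.IN_TRANSIT: {LoadState.COMPLETED},
    LoadState.COMPLETED: set(),
}

class LoadLifecycle:
    def __init__(self, freight_amount: float):
        self.state = LoadState.POSTED
        self.freight_amount = freight_amount
        self.payouts = []  # (milestone, amount) pairs

    def advance_to(self, new_state: LoadState) -> None:
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"Illegal transition {self.state} -> {new_state}")
        self.state = new_state
        # Milestone payments trigger only on verified state changes
        if new_state is LoadState.ASSIGNED:
            self.payouts.append(("advance_50", self.freight_amount * 0.5))
        elif new_state is LoadState.COMPLETED:
            self.payouts.append(("final_50", self.freight_amount * 0.5))

load = LoadLifecycle(freight_amount=20000.0)
load.advance_to(LoadState.ASSIGNED)    # releases the 50% advance
load.advance_to(LoadState.IN_TRANSIT)
load.advance_to(LoadState.COMPLETED)   # releases the final 50%
```

Tying payouts to an explicit transition table is what keeps mobile clients and the admin dashboard from drifting out of sync: there is exactly one legal path from posting to closure.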
The engagement began with a focused discovery phase to define load workflows, financial logic, and role-based access boundaries. This was followed by a Figma-driven design sprint to validate user flows across four distinct roles before development began. Once architecture and UX were aligned, the team moved into structured Agile sprints, prioritizing core load lifecycle management and milestone-based wallet automation for a stable Phase 1 release.
Development ran in iterative cycles with continuous client feedback, ensuring business logic and operational workflows stayed aligned. GitHub Actions powered CI/CD pipelines for controlled deployments, while staging validations were conducted before production pushes. The execution strategy emphasized stability first and feature expansion later, ensuring a disciplined MVP foundation ready for scale.
The platform required secure, production-grade integrations to support payments, authentication, notifications, mapping, and storage without adding architectural fragility.
| Integration | Platform | Implementation Purpose |
| --- | --- | --- |
| Payment Gateway | Razorpay | Milestone-based wallet payment processing with controlled capture and settlement logic |
| OTP Authentication | Twilio | Secure user onboarding and mobile number verification via SMS |
| Notifications | Novu | Event-driven multi-channel notification orchestration across user roles |
| Mapping & Location | Google Maps SDK | 50 km proximity-based load matching and route visibility |
| Cloud Storage | AWS S3 | Secure document and file storage for compliance and operational records |
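In production the proximity matching runs on the Google Maps SDK, but the underlying 50 km radius check can be illustrated with a plain haversine filter. The coordinates, load shape, and function names below are hypothetical.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometres."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def loads_within_radius(driver_pos, loads, radius_km=50.0):
    """Surface only loads whose pickup point lies within radius_km of the driver."""
    lat, lon = driver_pos
    return [
        load for load in loads
        if haversine_km(lat, lon, load["lat"], load["lon"]) <= radius_km
    ]

# Driver near Ahmedabad; one pickup a few km away, one ~440 km away in Mumbai
driver = (23.0225, 72.5714)
candidate_loads = [
    {"id": "L1", "lat": 23.03, "lon": 72.60},
    {"id": "L2", "lat": 19.0760, "lon": 72.8777},
]
nearby = loads_within_radius(driver, candidate_loads)
```

A real deployment would push this filtering into the database or a geospatial index rather than scanning every load in Python, but the cutoff logic is the same.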
Results and Measurable Impact
Phase 1 delivered a production-ready logistics operating system that replaced manual coordination with structured automation and financial control.
Automated load lifecycle management: End-to-end digital workflow from load posting to completion eliminated informal coordination and reduced dependency on manual follow-ups.
Structured milestone-based payments: 50%-50% wallet automation reduced ambiguity in settlements and introduced disciplined financial tracking across all stakeholders.
Reduced payment disputes: Programmatic commission deductions and audit-logged transactions minimized reconciliation conflicts between shippers, transporters, and drivers.
Centralized operational visibility: Admin command center enabled real-time monitoring of loads, payments, user onboarding, and revenue flows.
Stakeholders validated Phase 1 based on stability, financial transparency, and operational clarity. The structured wallet system executed milestone payments reliably, onboarding workflows functioned without manual bottlenecks, and the centralized admin dashboard provided real-time visibility across loads and settlements. The bilingual interface improved usability in field conditions, and core happy flows were delivered without structural compromises.
The system was formally handed over for active use, but engagement did not end at deployment. We remain involved in support, monitoring, and Phase 2 planning, including in-app chat, predictive analytics, dynamic pricing, and digital contract integration, ensuring the platform evolves without architectural rework.
Sarthitrans demonstrates what happens when logistics operations are engineered with financial discipline and architectural clarity from the ground up. By combining structured load lifecycle modeling, milestone-based wallet automation, granular RBAC, and centralized admin control, the platform replaced fragmented manual processes with a scalable digital system built for real-world Indian conditions.
If you are modernizing a logistics network, financial workflow, or multi-role marketplace and need architecture that scales without rework, we can help you design it correctly from Phase 1. Reach out to Linearloop to discuss how we can structure your system for stability, transparency, and long-term growth.
Instream is an established CRM platform built to support sales teams in managing relationships, tracking communication, and organizing pipeline activities within a single structured system. The product was already live, actively used by businesses, and embedded into daily sales workflows, which meant any changes had to be implemented without disrupting ongoing operations.
The company is led by CEO Filip Duszczak, who served as the primary stakeholder throughout the engagement. With an existing user base and a functioning product in production, the focus was on strengthening, stabilizing, and modernizing a system that sales teams already relied on for core business processes.
The project operated under strict constraints, where system stability was non-negotiable, and every change had to be introduced without disrupting active users. The complexity came from the need to modernize a live production environment safely.
No downtime allowed: The CRM was actively used by sales teams, so deployment strategies had to ensure uninterrupted access and zero service disruption during refactoring and infrastructure changes.
No regression in legacy workflows: Existing features and user behaviors had to remain fully functional, requiring deep system analysis and careful validation before introducing improvements.
No fixed deadline: Without a hard cutoff, disciplined milestone planning and prioritization were essential to prevent scope drift while maintaining delivery momentum.
Kubernetes complexity: Orchestrating deployment on Kubernetes introduced infrastructure-level challenges, including configuration management, environment consistency, and rollout safety.
Live data migration risk: Transitioning infrastructure while preserving production data required secure backups, controlled migration steps, and strict validation to eliminate the risk of data loss.
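One way to de-risk the live data migration described above is a row-count and checksum comparison between source and target snapshots before cutover. The sketch below is a simplified illustration of that validation step, not Instream's actual tooling.

```python
import hashlib

def table_fingerprint(rows):
    """Order-insensitive fingerprint of a table snapshot.

    Each row is hashed individually and the digests are sorted,
    so row order does not affect the result.
    """
    digests = sorted(
        hashlib.sha256(repr(row).encode()).hexdigest() for row in rows
    )
    return hashlib.sha256("".join(digests).encode()).hexdigest()

def validate_migration(source_rows, target_rows):
    """Row-count and checksum validation before switching traffic."""
    if len(source_rows) != len(target_rows):
        return False, "row count mismatch"
    if table_fingerprint(source_rows) != table_fingerprint(target_rows):
        return False, "checksum mismatch"
    return True, "ok"

source = [("alice", "alice@example.com"), ("bob", "bob@example.com")]
migrated = [("bob", "bob@example.com"), ("alice", "alice@example.com")]
ok, reason = validate_migration(source, migrated)
```

Running checks like this per table, alongside validated backups and a rollback plan, is what turns "migrate and hope" into a controlled, reversible transition.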
Success for Instream was defined by stability first, not feature expansion. The CRM was already in active use, so the primary expectation was that modernization efforts would not introduce regressions, performance degradation, or workflow interruptions. Preserving system reliability while improving it was the baseline requirement.
Seamless resolution of email and calendar integration issues was equally critical. Communication workflows had to function without friction across Gmail, Outlook, and calendar systems, ensuring sales teams could operate without technical barriers.
Improved usability also mattered. Navigation, responsiveness, and backend optimizations needed to translate into a smoother day-to-day experience for users. Finally, infrastructure readiness was a strategic goal: moving toward Kubernetes-based deployment and modern DevOps practices meant strengthening scalability and resilience while ensuring zero disruption during the transition.
Modernizing a live CRM introduced technical risk at multiple layers, particularly because the system was already in production and actively used by customers. The highest risk areas were embedded in legacy architecture, infrastructure transitions, and deployment orchestration decisions.
Deep legacy Django codebase: The application had evolved over time, which meant understanding historical design decisions, undocumented logic, and tightly coupled modules before making any structural improvements.
Hidden dependencies: Implicit connections between services, background jobs, and integrations increased the risk of unintended side effects during refactoring and feature upgrades.
Data migration: Production data had to remain intact during infrastructure changes, requiring validated backups, staged transitions, and rollback preparedness.
Server transition to DigitalOcean: Moving infrastructure environments introduced configuration, networking, and environment consistency risks.
Kubernetes orchestration risks: Container orchestration required precise configuration to prevent deployment failures, scaling issues, or service instability.
The CRM was built on a mature and production-tested stack that supported both transactional workflows and background processing requirements. Rather than introducing a completely new architecture, the focus was on strengthening and optimizing the existing foundation while aligning it with modern deployment standards.
Each layer of the stack played a defined role in maintaining performance, scalability, and operational control during modernization and infrastructure transition. The backend handled business logic and task orchestration, the frontend supported structured user interactions, and the infrastructure layer ensured controlled deployment and scalability under Kubernetes.
| Layer | Technology |
| --- | --- |
| Backend | Python |
| Web Framework | Django |
| Task Queue | Celery |
| Frontend | Angular |
| Database | PostgreSQL |
| Caching | Redis |
| Container Orchestration | Kubernetes (K8s) |
| Cloud Infrastructure | DigitalOcean |
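In a stack like this, Redis typically serves a cache-aside role in front of PostgreSQL: check the cache, fall back to the database on a miss, then populate the cache. The sketch below illustrates that pattern with an in-memory dict standing in for Redis; the key names and values are hypothetical.

```python
class CacheAside:
    """Cache-aside lookup; a plain dict stands in for Redis here."""

    def __init__(self):
        self.store = {}
        self.hits = 0
        self.misses = 0

    def get_or_compute(self, key, compute):
        # Serve from cache when present, otherwise compute and populate
        if key in self.store:
            self.hits += 1
            return self.store[key]
        self.misses += 1
        value = compute()
        self.store[key] = value
        return value

cache = CacheAside()

def pipeline_total():
    """Stands in for an expensive PostgreSQL aggregate query."""
    return 42_000

first = cache.get_or_compute("pipeline:total", pipeline_total)   # miss: hits DB
second = cache.get_or_compute("pipeline:total", pipeline_total)  # hit: cached
```

The real system would also need key expiry and invalidation on writes, which is where most cache-aside bugs live; the lookup path itself stays this simple.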
Architecture and Engineering Approach
The engineering approach prioritized controlled evolution over disruptive change. A careful refactoring strategy was implemented to understand legacy Django components before modifying them, ensuring that business logic and existing workflows remained intact. Improvements were introduced incrementally, with targeted code optimizations rather than large-scale rewrites, reducing regression risk and preserving system stability throughout the modernization process.
On the infrastructure side, DevOps pipelines were structured to enable predictable, repeatable deployments under Kubernetes. The system was modernized at the orchestration and hosting layers without an architectural overhaul, minimizing operational risk while improving scalability and deployment control. This approach strengthened reliability and future readiness without destabilizing the live production environment.
The project followed a structured Agile model to manage modernization within a live production environment, where controlled iteration was more critical than speed. Execution focused on disciplined delivery, continuous validation, and cross-functional coordination to prevent disruption while introducing improvements.
Agile methodology: Work was organized into structured cycles with defined scopes, enabling continuous progress while validating stability at each stage.
Iterative enhancements: Features, refactors, and infrastructure updates were introduced incrementally to reduce regression risk and maintain system continuity.
Milestone prioritization without deadline pressure: In the absence of a fixed deadline, progress was driven by clearly defined milestones to prevent scope drift and maintain engineering focus.
Collaboration between engineering and DevOps: Backend, frontend, and infrastructure efforts were closely aligned to ensure deployment readiness, environment consistency, and operational stability.
Communication workflows are central to any CRM, which made third-party integrations a functional priority rather than an optional enhancement. The objective was to stabilize existing connections, eliminate friction in daily sales activities, and introduce collaboration capabilities directly within the CRM interface.
Gmail integration fixes: Resolved synchronization and communication inconsistencies to ensure reliable email tracking and outbound messaging from within the CRM.
Google Calendar integration: Enabled structured scheduling visibility and event synchronization to support coordinated sales activities.
Outlook connectivity: Strengthened compatibility for users operating within Microsoft ecosystems, ensuring consistent email workflow support.
Google Meet integration: Embedded meeting capabilities directly into the CRM to streamline sales communication and reduce context switching.
Workflow improvements for CRM users: Reduced manual steps and integration friction, allowing sales teams to manage communication without leaving the platform.
The impact of the engagement was measured through system resilience, operational control, and user workflow stability rather than vanity metrics. The CRM transitioned from a legacy-bound system to a modernized, deployment-ready platform while remaining fully operational throughout the process.
Backend optimization: Refactoring and performance tuning reduced inefficiencies in background processing and improved overall application responsiveness.
Improved system stability: Legacy risks were mitigated through controlled changes, resulting in stronger reliability across core CRM workflows.
Infrastructure scalability readiness: Kubernetes-based orchestration positioned the platform for horizontal scaling and future growth without structural redesign.
Client feedback focused on execution discipline and stability. The legacy system was upgraded without disruption, and the refactoring approach preserved existing workflows while improving reliability. Confidence increased not because features were added, but because the system remained stable throughout modernization.
The Kubernetes migration strengthened infrastructure readiness and deployment control, positioning the CRM for scalable growth. With the final launch phase pending, work continues on enhanced reporting and communication features. Linearloop remains engaged as a long-term partner, supporting the platform’s evolution while safeguarding operational stability.
Modernizing a live legacy system requires more than technical upgrades; it demands disciplined engineering, controlled execution, and zero tolerance for disruption. This engagement strengthened Instream’s CRM at the code, infrastructure, and integration layers while keeping production stable and users uninterrupted.
If your platform is running on legacy architecture but cannot afford downtime or risk, the right modernization strategy makes the difference. Reach out to Linearloop to build a scalable, production-safe transformation roadmap tailored to your system.