How to Use Heatmaps, Data, and Hypotheses to Continuously Improve Conversions
Mayank Patel
Sep 15, 2025
5 min read
Last updated Sep 15, 2025
Table of Contents
Map Behavior in Action (Just Observation)
Enrich Signals
Framing Structured, Measurable Tests
Experimentation (Without Pitfalls)
Continuous Improvement and Integration
Every ecommerce team wants higher conversions. But too often, optimization efforts rely on guesswork (redesigning a button here, changing copy there) without clear evidence of what actually moves the needle. The result is sporadic wins at best and wasted effort at worst.
The most effective CRO programs don’t chase random ideas; they follow a system. They start with careful observation of user behavior, enrich those signals with analytics and feedback, form structured hypotheses, and run disciplined experiments.
Heatmaps, session recordings, form analytics, and funnel data are powerful tools, but they’re only valuable when used to generate focused hypotheses you can test. This guide walks you through that end-to-end process: how to map user behavior, enrich signals, frame testable ideas, experiment without pitfalls, and scale what works.
(1) Map Behavior in Action (Just Observation)
The first step is observation: mapping what users actually do on your site. This involves using tools to visualize and record user interactions. By capturing how visitors navigate pages, where they click, how far they scroll, and where they get stuck, you build a factual baseline of current UX performance. Key behavior-mapping tools include:
Heatmap
Visual overlays that show where users click, move their cursor, or spend time on a page. Hot colors indicate areas of high attention or clicks, while cool colors show low engagement. If a CTA or important link is in a “cool” zone with few clicks, it might be poorly placed or not visible enough.
Scroll Maps
A specialized heatmap showing how far down users scroll on a page. This reveals what proportion of visitors see each section of content. In practice, user attention drops sharply below the fold. If a scroll map shows that only 20% of users reach a critical product detail or signup form, that content is effectively unseen by the majority. This observation signals a potential layout or content hierarchy issue.
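If you want to cross-check a scroll-map finding against your own data, tracking scroll depth takes only a few lines. A minimal sketch, assuming a "/analytics/scroll" endpoint as a placeholder for whatever collector you use:

```ts
// Minimal scroll-depth tracker: records the deepest point a visitor reaches
// and reports it when the page is hidden. "/analytics/scroll" is a
// placeholder for your own collection endpoint.
let maxDepthPct = 0;

window.addEventListener(
  "scroll",
  () => {
    const doc = document.documentElement;
    if (doc.scrollHeight <= window.innerHeight) return; // nothing to scroll
    const pct = ((window.scrollY + window.innerHeight) / doc.scrollHeight) * 100;
    maxDepthPct = Math.max(maxDepthPct, Math.min(Math.round(pct), 100));
  },
  { passive: true }
);

document.addEventListener("visibilitychange", () => {
  if (document.visibilityState === "hidden") {
    // sendBeacon survives page unloads, unlike an ordinary fetch
    navigator.sendBeacon(
      "/analytics/scroll",
      JSON.stringify({ page: location.pathname, maxDepthPct })
    );
  }
});
```

Aggregating maxDepthPct across sessions gives you the same "only 20% reach the form" number a scroll-map tool reports.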
Session Replays (Session Recordings)
These capture real user sessions as videos. You can watch how visitors browse, where they hesitate, and what causes them to leave. Session replays are like a “virtual usability lab” at scale: for example, a user repeatedly clicking an image that isn’t clickable (a sign of confusion), or moving their mouse erratically before abandoning the cart (a sign of frustration).
By reviewing recordings, patterns emerge (e.g. many users rage-clicking a certain element or repeatedly hovering over an unclear icon). Establish a consistent process for reviewing replays (for instance, log “raw findings” in a spreadsheet with notes on each observed issue) so that subjective interpretation is minimized and recurring issues can be quantified.
Form Analytics
Specialized tracking of form interactions (e.g. checkout or sign-up forms). Form analytics show where users drop off in a multi-step form, which fields cause errors or timeouts, and how long it takes to complete fields. For example, if many users abandon the “Shipping Address” step or take too long on “Credit Card Number,” those fields might be causing friction.
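Dedicated form-analytics tools do this for you, but the underlying mechanics are simple. A hedged sketch of field-level tracking; the "#checkout-form" selector and "/analytics/form" endpoint are illustrative:

```ts
// Field-level form tracking: time spent per field, plus whether the form was
// ever submitted. "#checkout-form" and "/analytics/form" are illustrative.
const timePerField = new Map<string, number>();
let submitted = false;
let currentField = "";
let focusedAt = 0;

const form = document.querySelector<HTMLFormElement>("#checkout-form");

form?.addEventListener("focusin", (e) => {
  currentField = (e.target as HTMLInputElement).name;
  focusedAt = performance.now();
});

form?.addEventListener("focusout", () => {
  if (!currentField) return;
  const prev = timePerField.get(currentField) ?? 0;
  timePerField.set(currentField, prev + performance.now() - focusedAt);
  currentField = "";
});

form?.addEventListener("submit", () => {
  submitted = true;
});

window.addEventListener("pagehide", () => {
  // On abandoned sessions, the last touched field is a strong drop-off signal.
  const fields = [...timePerField].map(([field, ms]) => ({ field, ms }));
  navigator.sendBeacon("/analytics/form", JSON.stringify({ submitted, fields }));
});
```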
(2) Enrich Signals
After gathering behavioral data, the next step is enriching that data. This is where we transition from what users are doing (quantitative data) to why they’re doing it (qualitative context).
Key enrichment methods include:
Web Analytics & Funnel Data
Quantitative analytics (from tools like Google Analytics or similar) help size the impact of observed behaviors. They answer questions like: How many users experience this issue? Where in the funnel do most users drop off? For example, a heatmap might show only a few clicks on an “Add to Cart” button, but analytics can tell us that the page’s conversion rate is only 2%, and perhaps that 80% of users drop off before even seeing the button.
Analytics can also correlate behavior with outcomes: e.g. “Users who used the search bar converted 2X more often.” These metrics highlight which observed patterns are truly hurting performance. They also help prioritize: a problem affecting 50% of visitors (e.g. a homepage issue) is more urgent than one affecting 5%.
Segmentation
Breaking down data by visitor segments (device, traffic source, new vs. returning customers, geography, etc.) enriches the signals by showing who is affected. Often, averages hide divergent behaviors. For instance, segmentation might reveal that the conversion rate on desktop is 3.2% but on mobile only 1.8%, implying mobile users face more friction (common causes: smaller screens, slower load times, less convenient input).
Or perhaps new visitors click certain homepage elements far more than returning users do. By segmenting heatmaps or funnels, patterns emerge. For example, mobile visitors might scroll less and miss content due to screen length, or international users might struggle with a location-specific element. These insights guide more targeted hypotheses (maybe the issue is primarily on mobile, so test a mobile-specific change).
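Computing a segmented view is straightforward once you have session-level data. A sketch, assuming a minimal Session shape you would adapt to your own analytics export:

```ts
// Conversion rate per segment from session-level rows. The Session shape is
// an assumption; adapt it to your analytics export.
interface Session {
  device: "desktop" | "mobile";
  converted: boolean;
}

function conversionBySegment(sessions: Session[]): Record<string, string> {
  const tally = new Map<string, { total: number; wins: number }>();
  for (const s of sessions) {
    const t = tally.get(s.device) ?? { total: 0, wins: 0 };
    t.total += 1;
    if (s.converted) t.wins += 1;
    tally.set(s.device, t);
  }
  const rates: Record<string, string> = {};
  for (const [segment, t] of tally) {
    rates[segment] = ((100 * t.wins) / t.total).toFixed(1) + "%";
  }
  return rates;
}

// e.g. { desktop: "3.2%", mobile: "1.8%" }: the gap, not the average, is the signal
```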
On-Site Surveys and Voice-of-Customer (VoC)
Sometimes the best way to learn “why” users behaved a certain way is to ask them. Targeted surveys and feedback polls can be deployed at strategic points. For example, an exit-intent survey when a user drops out of checkout (“What prevented you from completing your purchase today?”). Or an on-page poll after a user scrolls through a product page without adding to cart (“Did you find the information you were looking for?”).
Survey responses often highlight frictions or doubts. For example, “The shipping cost was shown too late” or “I couldn’t find reviews.” These qualitative signals explain the observed behavior (e.g. “why did 60% abandon on shipping step?”). Even language from customers can be valuable; if multiple users say “the form is too long,” that’s a clear direction for hypothesis. User reviews and customer service inquiries are another VoC source.
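Exit-intent triggers are usually provided by survey tools, but the detection itself is a one-listener pattern. A sketch; showSurvey is a hypothetical stand-in for your survey widget:

```ts
// One-listener exit-intent detection: fires when the cursor leaves through
// the top of the viewport (heading for the tab/close buttons).
declare function showSurvey(question: string): void; // hypothetical widget API

let surveyShown = false;

document.addEventListener("mouseout", (e: MouseEvent) => {
  const leftViewportTop = e.relatedTarget === null && e.clientY <= 0;
  if (leftViewportTop && !surveyShown) {
    surveyShown = true;
    showSurvey("What prevented you from completing your purchase today?");
  }
});
```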
Heuristic UX Evaluation
In addition to direct user feedback, an expert UX/CRO audit can enrich signals by identifying known usability issues that might explain user behavior. For example, if session replays show users repeatedly clicking an image, a UX heuristic would note that the image isn’t clickable but looks like it should be (violating the principle of affordance). While this is more expert-driven than data-driven, it helps generate potential causes for the observed friction which can then be tested.
The result of signal enrichment is a more complete problem diagnosis. We combine the quantitative (“how many, how often, where”) with the qualitative (“why, in what way, what’s the user sentiment”) to turn raw observations into actionable insights. Quant data may tell us that users are struggling, but it doesn’t tell us what specific problems they encountered or how to fix them; for that, qualitative insights are needed.
Likewise, qualitative anecdotes alone can be misleading if not quantified. Thus, a core LinearCommerce strategy is to triangulate data. Every hypothesis should ideally be backed by multiple evidence sources (e.g. “Analytics show a 70% drop-off on Step 2, session recordings show confusion, and survey feedback cites ‘form is too long’”). When multiple signals point to the same issue, you’ve found a high-confidence target for optimization.
(3) Framing Structured, Measurable Tests
With a clear problem insight in hand, we move to forming a hypothesis. A hypothesis is a testable proposition for how changing something on the site will affect user behavior and metrics. Crafting a strong hypothesis helps you run experiments that are grounded in rationale, focused on a single change, and tied to measurable outcomes.
In the LinearCommerce framework, a good hypothesis has several key characteristics:
Rooted in Observation & Data
The hypothesis must directly address the observed problem with a cause-and-effect idea. We don’t test random ideas or “flashy” redesigns in isolation. We propose a change because of specific evidence.
For example: “Because heatmaps show the CTA is barely seen by users (only 20% scroll far enough) and many users abandon mid-page, we believe that moving the CTA higher on the page will increase click-through to the next step.” This draws a clear line from observation to proposed solution.
Specific Change (the “Lever”)
Define exactly what you will change and where. Vague hypotheses (“improve the checkout experience”) are not actionable. Instead: “Adding a progress indicator at the top of the checkout page” or “Changing the ‘Buy Now’ button color from green to orange on the product page” are concrete changes.
Being specific is important both for designing the test and for interpreting results. Each hypothesis should generally test one primary change at a time, so that a positive or negative result can be attributed to that change. (Multivariate tests are an advanced method to test multiple changes simultaneously, but even then each factor is explicitly defined.)
Predicted Impact and Metrics
A hypothesis should state the expected outcome in terms of user behavior and the metric you’ll use to measure it. In other words, what KPI will move if the hypothesis is correct? For example: “…will result in an increase in checkout completion rate” or “…will reduce form error submissions by 20%”. Pick a primary metric aligned with your overall goal.
If your goal is more purchases, the primary metric might be conversion rate or revenue per visitor; not just clicks or time on page, which are secondary. Defining the metric in the hypothesis keeps the team focused on what success looks like.
Pitfall to avoid: choosing a metric that doesn’t truly reflect business value (e.g. click rate on a button might go up, but if it doesn’t lead to more sales, was it a meaningful improvement?). Teams must agree on what they are optimizing for and use a metric that predicts long-term value. For instance, optimizing for short-term clicks at the expense of user frustration is not a win.
A simple template captures all four elements: “Because we see (data/insight A), we believe that changing (element B) will result in (desired effect C), which we will measure by (metric D).”
For example: “Because 18% of users abandon at the shipping form (data), we believe that simplifying the checkout to one page (change) will increase completion rate (effect), as measured by checkout conversion% (metric).”
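It can help to store hypotheses as structured records rather than free text, so they can be queried, prioritized, and linked to test results later. One possible shape, following the template above (the field names are our own convention, not a standard):

```ts
// A hypothesis as a structured record following the template above.
// Field names are our own convention, not an industry standard.
interface Hypothesis {
  observation: string;    // data/insight A
  change: string;         // element B
  expectedEffect: string; // desired effect C
  metric: string;         // metric D
}

const checkoutHypothesis: Hypothesis = {
  observation: "18% of users abandon at the shipping form",
  change: "Simplify the checkout to one page",
  expectedEffect: "Increase completion rate",
  metric: "checkout conversion %",
};
```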
After writing hypotheses, prioritize them.
You’ll generate many hypothesis ideas (often added to a backlog or experimentation roadmap). Not all can be tested at once, so rank them by factors like impact (how much improvement you expect, how many users affected), confidence (how strong the evidence is), and effort (development and design complexity).
A popular prioritization framework is ICE: Impact, Confidence, Ease. For instance, a hypothesis that addresses a major dropout point, is backed by strong data, and needs only a simple UI tweak would score high and likely be tested before a minor cosmetic change. Scoring this way also guards against the HiPPO effect (the Highest Paid Person’s Opinion) and pet projects that lack supporting data.
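ICE scoring is easy to automate over a backlog. A sketch using a common 1-10 scale per factor; the items and scores below are illustrative:

```ts
// ICE prioritization: score each idea 1-10 on Impact, Confidence, and Ease,
// then rank by the product. Items and scores below are illustrative.
interface BacklogItem {
  name: string;
  impact: number;
  confidence: number;
  ease: number;
}

const ice = (i: BacklogItem): number => i.impact * i.confidence * i.ease;

const backlog: BacklogItem[] = [
  { name: "One-page checkout", impact: 9, confidence: 8, ease: 4 },
  { name: "Move CTA above the fold", impact: 7, confidence: 7, ease: 9 },
  { name: "New footer icon colors", impact: 2, confidence: 3, ease: 8 },
];

backlog
  .sort((a, b) => ice(b) - ice(a))
  .forEach((i) => console.log(`${ice(i).toString().padStart(3)}  ${i.name}`));
// 441  Move CTA above the fold
// 288  One-page checkout
//  48  New footer icon colors
```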
(4) Experimentation (Without Pitfalls)
With hypotheses defined, we proceed to experimentation, where we run controlled tests to validate (or refute) our hypotheses. A disciplined experimentation process is crucial: it’s how we separate ideas that actually improve conversion from those that don’t. Below are best practices for running experiments, as well as common pitfalls to avoid.
Choose the Right Test Method
The most common approach is an A/B test: splitting traffic between Version A (control, the current experience) and Version B (variant with the change) to measure differences in user behavior. A/B tests are powerful because they isolate the effect of the change by randomizing users into groups.
For more complex scenarios, you might use A/B/n (multiple variants) or multivariate tests (testing combinations of multiple changes simultaneously), but these require larger traffic to reach significance. If traffic is limited, sequential testing (rolling out a change and comparing before/after, carefully accounting for seasonality) could be considered, though it’s less rigorous.
In any case, the experiment design should align with the hypothesis: test on the specified audience (e.g. mobile users if hypothesis was mobile-focused), run for the planned duration, and make sure you’re capturing the defined metrics (set up event tracking or goals if needed).
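One common implementation detail worth knowing: testing tools typically assign users to variants deterministically, by hashing a stable user ID together with the experiment name, so the same visitor always sees the same variant without any assignment storage. A sketch of that pattern (not any specific tool's API):

```ts
import { createHash } from "node:crypto";

// Deterministic 50/50 assignment: hash a stable user ID together with the
// experiment name, so a user always lands in the same variant with no
// assignment storage. A common pattern, not any specific tool's API.
function assignVariant(userId: string, experiment: string): "A" | "B" {
  const digest = createHash("sha256").update(`${experiment}:${userId}`).digest();
  return digest[0] % 2 === 0 ? "A" : "B";
}

console.log(assignVariant("user-42", "checkout-progress-bar")); // stable per user
```

Salting the hash with the experiment name means each experiment gets an independent split, which also reduces interaction between concurrent tests.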
Run Tests to Statistically Valid Conclusions
Perhaps the biggest testing pitfalls are statistical in nature. It’s essential to let the test run long enough to gather a sufficient sample size and reach statistical significance for your primary metric. Ending a test too early (for example, stopping as soon as you see a positive uptick) can lead to false positives: noise mistaken for a real win. This is known as the “peeking” problem.
To avoid this, determine in advance the needed sample or test duration based on baseline conversion rates and the minimal detectable lift you care about. Use statistical calculators or tools that enforce significance thresholds. Remember that randomness is always at play; a standard threshold is 95% confidence to call a winner.
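The standard normal-approximation formula for a two-proportion test makes the sample-size math concrete. A sketch with z-values hard-coded for 95% confidence and 80% power:

```ts
// Sample size per variant for a two-proportion test, via the standard
// normal-approximation formula. z-values are hard-coded for a two-sided
// α = 0.05 (1.96) and 80% power (0.84).
function sampleSizePerVariant(baseline: number, relativeLift: number): number {
  const zAlpha = 1.96;
  const zBeta = 0.84;
  const p1 = baseline;
  const p2 = baseline * (1 + relativeLift);
  const pBar = (p1 + p2) / 2;
  const numerator =
    zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
    zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));
  return Math.ceil((numerator / (p1 - p2)) ** 2);
}

// A 2% baseline with a +10% relative lift target needs a lot of traffic:
console.log(sampleSizePerVariant(0.02, 0.1)); // ≈ 80,600 per variant
```

Running the numbers up front tells you whether a test is even feasible with your traffic before you commit to it.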
Ensure Data Quality
Before trusting the outcome, verify the experiment was implemented correctly. Check for SRM (Sample Ratio Mismatch): if you intended a 50/50 traffic split but one variant received significantly more or less traffic, that’s a red flag that something is technically wrong (e.g. a bucketing issue, or page flicker causing users to drop out).
Also monitor for tracking errors. If conversion events didn’t fire correctly, the results could be invalid. It’s wise to run an A/A test on your platform occasionally or use built-in checks to ensure the system isn’t skewing data. Quality checks include looking at engagement metrics in each group (they should be similar if only one change was made) and ensuring no external factors (marketing campaigns, outages) coincided with only one variant.
A robust experimentation culture invests in detecting these issues: for example, capping extremely large outlier purchases that can skew revenue metrics, or filtering bot traffic (which can be surprisingly high). Garbage in, garbage out: a CRO test is only as good as the integrity of its data.
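An SRM check is a simple chi-square goodness-of-fit test against the intended split. A sketch for a 50/50 design:

```ts
// SRM check: chi-square goodness-of-fit against an intended 50/50 split
// (df = 1). 3.841 is the critical value at α = 0.05; many teams use a far
// stricter threshold (e.g. 10.83, α = 0.001) because SRM signals a bug,
// not a borderline statistical question.
function hasSampleRatioMismatch(countA: number, countB: number): boolean {
  const expected = (countA + countB) / 2;
  const chiSq =
    (countA - expected) ** 2 / expected + (countB - expected) ** 2 / expected;
  return chiSq > 3.841;
}

console.log(hasSampleRatioMismatch(50_200, 49_800)); // false: within noise
console.log(hasSampleRatioMismatch(50_310, 47_000)); // true: investigate first
```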
Analyze Results Holistically
When the test period ends (or you’ve hit the required sample size), analyze the outcome with an open and scientific mind. Did the variant achieve the expected lift on the primary metric? How about secondary metrics or any guardrail metrics (e.g. it increased conversion but did it impact average order value or customer satisfaction)? It’s possible a change “wins” for the primary KPI but has unintended side effects (for example, a UX change increases sign-ups but also spikes customer support tickets).
Always segment the results as well. A variant might perform differently for different segments. Perhaps the new design improved conversions for new users but had no effect on returning users. Or it helped mobile but not desktop. These nuances can generate new hypotheses or tell you to deploy a change only for a certain segment. Avoid confirmation bias: don’t only look for data that confirms your hypothesis; also ask “what does the evidence truly say?” If the test showed no significant change, that's learning too.
A quick summary:
A “winner” that improves conversion by 0.2% may not be practically meaningful or could be noise. Focus on changes that move the needle in a practically significant way.
Watch out for uneven traffic splits, tracking errors, or external events affecting tests. An invalid test can mislead you with bogus results.
Make sure you measure success by metrics that align with long-term business goals (e.g. revenue, conversion to paid customer) rather than vanity metrics. Agree on your OEC (Overall Evaluation Criterion) upfront.
If you run multiple tests on the same audience concurrently, be careful of interaction effects. For instance, two tests on the checkout at once could influence each other’s outcomes. Stagger or isolate test audiences if possible to maintain clarity.
One A/B test on one site section gives you evidence for that context. Don’t overgeneralize (“this layout always wins”) without considering context. Re-test major changes if the context or audience changes (season, traffic source, etc.) to ensure the finding holds.
Sometimes a “failed” test can be tweaked and re-run. Treat experimentation as iterative. Maybe the first design wasn’t quite right, but a revised version could work. The key is to use the data to refine your understanding.
(5) Continuous Improvement and Integration
A single A/B test can yield a nice lift; a CRO system yields compound gains over time by constantly learning and iterating. This stage involves institutionalizing the practices from the first four steps, managing a pipeline of experiments, feeding lessons back into the strategy, and ensuring your CRO efforts mesh with the broader e-commerce stack.
Build a Continuous Feedback Loop
Think of the CRO process as a loop: Observe → Hypothesize → Experiment → Learn → back to Observe. After an experiment concludes, you gather learnings which often lead to new observations or questions. For example, a test result might reveal a new user behavior to investigate (“Variant B won, suggesting users prefer the simpler form, but we noticed mobile users still lagged; let’s observe their sessions more”).
Successful optimization programs embrace this loop. After implementing a winning change, immediately consider what the next step is, perhaps that win opens up another bottleneck to address. Conversely, if a test was inconclusive, dig into qualitative insights to guide the next hypothesis. By closing the loop, you create a cycle of continuous improvement.
Maintain a Prioritized Backlog
As you conduct observations and brainstorm hypotheses, maintain a CRO backlog (or experiment roadmap). This is a living list of all identified issues, ideas for improvement, and hypotheses, each tagged with priority, status, and supporting data. Treat this backlog similar to a product backlog.
Regularly update priorities based on recent test results or new business goals. For instance, if a recent test revealed a big opportunity in site search, hypotheses related to search might move up in priority. A well-managed backlog also prevents “idea loss”: good ideas that aren’t tested immediately are not forgotten; they remain queued with their rationale noted.
Scale Up What Works
When a test is successful, consider how to scale that improvement. Deploy the change in production (making sure it’s implemented cleanly and consistently). Then ask: can this insight be applied elsewhere? For instance, if simplifying the checkout boosted conversion, can similar simplification help on the account signup flow? Or if a new product page layout worked for one category, should we extend it to other categories (with caution to test if contexts differ)?
This is where CRO intersects with broader UX and product development. Good ideas found via testing can inform the global design system and UX guidelines. Integrate the winning elements into your design templates, style guides, and development sprints so that other projects naturally use those proven best practices.
Mayank Patel
CEO
Mayank Patel is an accomplished software engineer and entrepreneur with over 10 years of experience in the industry. He holds a B.Tech in Computer Engineering, earned in 2013.
Modern e-commerce sites almost universally employ faceted search and filtering to help users slice through a vast catalog. Faceted search (also called guided navigation) uses those product attributes as dynamic filters, for example, filtering a clothing catalog by size, color, price range, brand, etc., all at once.
This capability is a powerful antidote to the infinite shelf’s chaos. By narrowing the visible options step by step, facets give users a sense of control and progress toward their goal. Each filter applied makes the result set smaller and more relevant.
From an implementation standpoint, faceted search relies on indexing product metadata and often involves clever algorithms to decide which filters to show. With a large catalog, there may be tens of thousands of attribute values across products, so showing every possible filter is neither feasible nor user-friendly.
Instead, e-commerce search engines dynamically present the most relevant filters based on the current query or category context. For example, if a user searches for “running shoes,” the site might immediately offer facets for men’s vs women’s, size, shoe type, etc., instead of unrelated filters like “color of laces” that add little value.
By analyzing the results set, the system can suggest the filters that are likely to matter, essentially reading the shopper’s mind about how they might want to refine the search. This dynamic filtering logic is often backed by data structures like inverted indexes for search and bitsets or specialized databases for fast faceted counts.
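Stripped of the index machinery, facet counting is just tallying attribute values across the current result set. A sketch (real engines do this over inverted indexes or bitsets for speed, but the logic is the same):

```ts
// Facet counting over a result set: tally each attribute value across the
// matching products. Real engines do this over inverted indexes or bitsets;
// the logic is the same.
interface Product {
  id: string;
  facets: Record<string, string>;
}

function facetCounts(results: Product[]): Map<string, Map<string, number>> {
  const counts = new Map<string, Map<string, number>>();
  for (const product of results) {
    for (const [facet, value] of Object.entries(product.facets)) {
      const values = counts.get(facet) ?? new Map<string, number>();
      values.set(value, (values.get(value) ?? 0) + 1);
      counts.set(facet, values);
    }
  }
  return counts;
}

const results: Product[] = [
  { id: "1", facets: { gender: "men", size: "10" } },
  { id: "2", facets: { gender: "men", size: "9" } },
  { id: "3", facets: { gender: "women", size: "9" } },
];
console.log(facetCounts(results));
// gender → { men: 2, women: 1 }, size → { "10": 1, "9": 2 }
```

Ranking facets by how evenly they split the result set is one simple heuristic for deciding which filters to surface first.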
Even with a great taxonomy and strong filters, two different shoppers landing on the same mega-catalog will have very different needs. This is where personalization and recommendation algorithms become indispensable.
Advanced e-commerce platforms now use machine learning to dynamically curate and rank products for each user. By analyzing user data: past purchases, browsing behavior, search queries, demographic or contextual signals, algorithms can determine which subset of products out of thousands will be most relevant to that individual.
Recommendation engines are at the heart of this personalized merchandising. These systems use techniques like collaborative filtering (finding patterns from similar users’ behavior), content-based filtering (matching product attributes to user preferences), and hybrid models to surface products a shopper is likely to click or buy.
For example, a personalization engine might note that a visitor has been viewing hiking gear and thus highlight outdoor jackets and boots on the homepage for them, while another visitor sees a completely different set of featured products.
User behavior analytics feed these models: every click, add-to-cart, and dwell time becomes input to refine what the algorithm shows next. Over time, the site “learns” each shopper’s tastes. The benefit is two-fold: customers are less overwhelmed (since they’re shown a tailored slice of the catalog rather than a random assortment) and more delighted by discovery (since the selection feels relevant).
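The simplest collaborative-filtering flavor is item-based co-occurrence: “shoppers who interacted with X also interacted with Y.” A sketch with illustrative data; production recommenders normalize for item popularity and blend many more signals:

```ts
// Item-based co-occurrence: rank products by how often they appear in the
// same histories as the seed product. Production recommenders normalize for
// popularity and blend many more signals.
type History = string[]; // product IDs one user interacted with

function alsoViewed(histories: History[], seed: string, k = 3): string[] {
  const co = new Map<string, number>();
  for (const items of histories) {
    if (!items.includes(seed)) continue;
    for (const other of items) {
      if (other !== seed) co.set(other, (co.get(other) ?? 0) + 1);
    }
  }
  return [...co.entries()]
    .sort((a, b) => b[1] - a[1]) // stable sort keeps insertion order on ties
    .slice(0, k)
    .map(([id]) => id);
}

const histories: History[] = [
  ["hiking-boots", "rain-jacket", "wool-socks"],
  ["hiking-boots", "rain-jacket"],
  ["hiking-boots", "trail-map"],
];
console.log(alsoViewed(histories, "hiking-boots"));
// ["rain-jacket", "wool-socks", "trail-map"]
```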
A smart strategy is to vary the merchandising approach for different contexts and customers. For first-time or anonymous visitors (where no prior data is known), showing the entire endless catalog would be counterproductive.
It’s often better to present curated selections like bestsellers or trending products. This “warm start” gives new shoppers a manageable starting point instead of a blank page or an intimidating browse-all experience. On the other hand, returning customers or logged-in users can immediately see personalized recommendations based on their history. The key is using data wisely to guide different customer segments toward discovery without ever letting them feel lost.
Modern recommendation systems also use contextual data and advanced algorithms. For instance, some platforms adjust recommendations in real-time based on the shopper’s current session behavior or even the device they use. (Showing simpler, more general suggestions on a mobile device where screen space is limited can outperform overly detailed personalization, whereas desktop can offer more nuanced recommendations.)
Cutting-edge e-commerce architectures are exploring vector embeddings and deep learning models to capture subtle relationships between products and users to enable features like visual search or chatbot-based product discovery.
Guiding Customers, Not Confusing Them
UX design choices play a huge role in whether the shopping experience feels inspiring or exhausting. Just because you can display thousands of products doesn’t mean you should dump them all in front of the user at once.
Above-the-Fold Impact
The content at the top of category pages, search results, and homepages is disproportionately influential. Critical items (whether they are popular products, lucrative promotions, or highly relevant personalized picks) should be merchandised in those prime slots. As a case in point, product recommendations or banners shown in the top viewport are roughly 1.7× more effective than those displayed below the fold.
Infinite Scroll vs. Structured Browsing
There is an ongoing UX debate in ecommerce about using infinite scrolling versus traditional pagination or curated grouping. Infinite scroll automatically loads more products as the user scrolls down. This can increase engagement time, as users don’t have to click through pages and are continuously presented with new items.
However, infinite scroll can also backfire if not implemented carefully. If shoppers feel they are wading through a bottomless list, they may give up. And once they scroll far, finding their way back or remembering where something was can be difficult. User testing has found that people have a limited tolerance for scrolling: after a certain point, they either find something that catches their eye or they tune out.
A balanced approach is often best. Many sites employ a hybrid: load a substantial chunk of products with an option to “Load More” (giving the user control), or use infinite scroll but with clear segmentation and filtering options always visible.
Aside from search and filters, consider adding guided discovery tools in the UX. This might include features like dynamic product groupings, recommendation carousels, or wizards and quizzes. For example, you can programmatically create curated “shelves” on the fly, such as a “Best Gifts for Dog Lovers” collection that appears if the user’s behavior suggests interest in pet products.
These can be powered by the same algorithms we discussed earlier, which can identify meaningful product groupings from trends in data. Such groupings address a common UX gap: a customer may be looking for a concept (“cream colored men’s sweater” or “outdoor kitchen ideas”) that doesn’t neatly map to a single pre-defined category.
Relying solely on static navigation might give them poor results or force them to manually hunt. By dynamically detecting intent clusters and generating pages or sections for them, you improve the chance that every user finds a relevant path. It’s impractical for human merchandisers to pre-create pages for every niche query (there could be effectively infinite intents), so this is an area where algorithmic assistance shines.
Conclusion
Merchandising is no longer a downstream activity that happens after inventory is set; it’s upstream, shaping how catalogs are structured, how data is modeled, and how algorithms are trained. Teams that treat merchandising as a technical capability—not just a marketing function—will be positioned to turn complexity into competitive advantage.
APIs: REST vs. GraphQL
Medusa exposes a RESTful API by default for both storefront and admin interactions. This straightforward approach often means easier onboarding for developers (REST is ubiquitous and simple to test). Saleor is strictly GraphQL: all queries and mutations go through GraphQL endpoints. Vendure by design also uses GraphQL APIs for both its shop and admin endpoints.
(Vendure does allow adding REST endpoints via custom extensions if needed, but GraphQL is the primary interface).
There are pros and cons here:
GraphQL allows more flexible data retrieval (clients can ask for exactly what they need), which is great for complex UI needs and can reduce network requests. However, GraphQL adds complexity: you need to construct queries and manage a GraphQL client.
REST, on the other hand, is simple and cache-friendly but can sometimes require multiple requests for complex pages.
Importantly, for those who care about GraphQL vs REST, Medusa historically did not have a built-in GraphQL API (though you could generate one via OpenAPI specs or community projects), whereas Saleor and Vendure natively support GraphQL out of the box.
If GraphQL is a must-have for you or your dev team, Saleor and Vendure tick that box easily; Medusa might require some extra work or using its REST endpoints.
On the flip side, if GraphQL seems overkill for your needs, Medusa’s simpler REST approach can be a relief. (Note: GraphQL being “language agnostic” means even if Saleor’s core is Python, you can consume its API from any stack, which is why some argue the core language matters less if you treat the platform as a standalone service.)
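To make the trade-off concrete, here is the same product fetch in both styles. The endpoints and field names are generic placeholders, not any of these platforms' actual schemas:

```ts
// The same product fetch both ways. Endpoints and field names are generic
// placeholders, not any specific platform's schema.

// GraphQL: one request, exactly the fields the UI needs.
async function viaGraphQL(slug: string) {
  const res = await fetch("https://shop.example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: `query ($slug: String!) {
        product(slug: $slug) { name variants { sku price } }
      }`,
      variables: { slug },
    }),
  });
  return (await res.json()).data.product;
}

// REST: simple and cache-friendly, but a complex page may need several calls.
async function viaREST(slug: string) {
  const base = "https://shop.example.com";
  const product = await (await fetch(`${base}/products/${slug}`)).json();
  const variants = await (await fetch(`${base}/products/${slug}/variants`)).json();
  return { ...product, variants };
}
```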
Architecture and Modular Design
All three are headless and API-first, meaning the back-end business logic is decoupled from any front-end. They each allow (or encourage) running additional services for certain tasks:
Medusa
The architecture is relatively monolithic but modular internally. You run a Medusa server which handles all commerce logic and exposes APIs. Medusa’s philosophy is to keep the core simple and let functionality be added via plugins (which run in the same process).
This design avoids a microservices explosion for small projects; everything is one Node process (plus a database and perhaps a search engine). This is great for smaller teams. Medusa uses a single database (by default Postgres) for storing data, and you can deploy it as a single service (with optional separate services for things like a storefront or an admin dashboard UI).
Saleor
Saleor’s architecture revolves around Django conventions. It’s also monolithic in the sense that the Saleor server handles everything (GraphQL endpoints, business logic, etc.) in one service, backed by a PostgreSQL database. However, Saleor encourages a slightly different extensibility model: you can extend by writing “plugins” within the core or by building “apps” (microservices) that integrate via webhooks and the GraphQL API.
This dual approach means if you want to alter core behavior deeply, you might write a Python plugin that has access to the database and internals. Or, if you prefer to keep your extension separate (or write it in another language), you can create an app that talks to Saleor’s API from the outside and is authorized via API tokens.
The latter is useful for decoupling (and is language-agnostic), but it means that extension can only interact with Saleor through GraphQL calls and webhooks, not direct DB access. Saleor’s design also supports containerization and scaling; it’s easy to run Saleor in Docker and scale out the services (plus it has support for background tasks and uses things like Celery for asynchronous jobs in newer versions).
Vendure
Vendure is structured as a Node application with a built-in modular system. It runs as a central server (plus an optional separate worker process for heavy tasks). Vendure’s internal architecture is plugin-based: features like payment processing, search, etc., are implemented as plugins that can be included or replaced.
Developers can write their own plugins to extend functionality without forking the core. Vendure uses an underlying NestJS framework, which imposes a certain organized structure (modules, providers, controllers, etc.) that leads to a clean separation of concerns.
It also means Vendure can benefit from NestJS features like dependency injection and middleware. Vendure includes a separate worker process capability: for tasks like sending emails or updating search indexes asynchronously, a background worker can be run to offload that work. This is great for scalability, as heavy operations don’t block the main API event loop.
Vendure’s use of GraphQL and a strongly typed schema also means frontends can auto-generate typed SDKs (for example, generating TypeScript query hooks from the GraphQL schema).
Admin & Frontend Architecture
It’s worth noting how each handles the Admin dashboard and starter Storefronts, since these are part of architecture in a broad sense:
Medusa Admin
Medusa provides an admin panel (open source) built with React and GatsbyJS (TypeScript). It’s a separate app that communicates with the Medusa server over REST. You can deploy it separately or together with the server.
The admin is quite feature-rich (products, orders, returns, etc.) and since it’s React-based, it’s relatively straightforward for JS developers to customize or extend with new components. Medusa’s admin UI being a decoupled frontend means it’s optional: if you wanted, you could even build your own admin or integrate Medusa purely via API, but most users will use the provided one for convenience.
Saleor Admin
Saleor’s admin panel is also decoupled and is built with React (they have a design system called Macaw-UI). It interacts with the Saleor core via GraphQL. You can use the official admin or fork/customize it if needed. Saleor allows creating API tokens for private apps via the admin, so you can integrate external back-office systems easily. Saleor’s admin is quite polished and supports common tasks (managing products, orders, discounts, etc.). As with Medusa, the admin is essentially a client of the backend API.
Vendure Admin
Vendure’s admin UI comes as part of the default package: implemented in Angular and delivered as a plugin (AdminUiPlugin) that serves the admin app. By default, a standard Vendure installation includes this admin. Administrators access it to manage catalog, orders, settings, etc.
Even if you’re not an Angular developer, you can still use the admin as provided. Vendure documentation notes that you “do not need to know Angular to use Vendure” and that the admin can even be extended with custom UI extensions written in other frameworks (they provide some bridging for that).
However, major custom changes to the admin likely require Angular skills. Some teams choose to build a custom admin interface (e.g., in React) by consuming Vendure’s Admin GraphQL API, but that’s a bigger effort. So out of the box, Vendure gives you a functioning admin UI which is sufficient for many cases, though perhaps not as slick as Medusa’s or Saleor’s React-based UIs in terms of look and feel.
Storefronts
All three being headless means you’re expected to build or integrate a storefront. To jump-start development, each provides starter storefront projects:
Medusa offers a Gatsby starter that’s impressively full-featured, including typical e-commerce pages (product listings, cart, checkout) and advanced features like customer login and order returns, all wired up to Medusa’s backend. It basically feels like a ready-made theme you can customize, which is great for fast prototyping. Medusa also has starters or example integrations with Next.js, Nuxt (Vue), Svelte, and others.
Saleor provides a React/Next.js Storefront starter (sometimes referred to as “Saleor React Storefront”). It’s a Next.js app that you can use as a foundation for your shop, already configured to query the Saleor GraphQL API. This covers basics like product pages, cart, etc., but might not be as feature-complete out of the box as Medusa’s Gatsby starter (for example, handling of returns or customer accounts might require additional work).
Vendure, as mentioned, has official starters in Remix, Qwik, and Angular. These starter storefronts include all fundamental e-commerce flows (product listing with facets, product detail, search, cart, checkout, user accounts, etc.) using Vendure’s GraphQL API. The Remix and Qwik starters are particularly interesting as they focus on performance (Remix for fast server-rendered React, Qwik for ultra-fast hydration). Vendure thus gives a few choices depending on your front-end preference, though notably, there isn’t an official Next.js starter from Vendure’s team as of 2025. However, the community or third parties might provide one, and in any case, you can build one easily with their GraphQL API.
Core Features Comparison
All modern e-commerce platforms cover the basics: product listings, shopping cart, checkout, order management, etc. However, differences emerge in how features are implemented and what is provided natively vs. via extensions. Let’s compare some key feature areas and note where each platform stands out:
Product Catalog Management
Product Models
Products in Medusa can have multiple variants (for example, a T-shirt with different sizes/colors) and are grouped into Collections (a collection is essentially a group of products, often used like categories). Medusa also supports tagging products with arbitrary tags for additional grouping or filtering logic.
Medusa’s philosophy is to keep the core product model fairly straightforward, and encourage integration with external Product Information Management (PIM) or CMS if you need extremely detailed product content (e.g., rich descriptions, multiple locale content, etc.). It does provide all the basics like images, description, prices, SKUs, etc., and inventory tracking out of the box.
Saleor’s product catalog is a bit more structured. It supports organizing products by Categories and Collections. A Category in Saleor is a tree structure (like traditional e-commerce categories) and a Collection is more like a curated grouping (similar to Medusa’s collections).
Saleor also has a notion of Product Types and attributes; you can define custom product attributes and assign them to types (for example, a “Shoes” product type might have size and color attributes). These attributes can then be used as filters on the storefront.
This system provides flexibility to extend product data without modifying code, which can be powerful for store owners. Saleor supports multiple product variants per product as well (with the attributes distinguishing them).
As for tagging, Saleor doesn’t offer simple product tags in the admin (at least as of this comparison), but because it has custom attributes and categories, those features usually fill that gap.
Saleor’s admin also allows adding metadata to products if needed, and its GraphQL API is quite adept at querying any of these structures.
Vendure combines aspects of both. It has Product entities that can have variants, and it supports a Category-like system through a feature called Collections (Vendure’s Collections are hierarchical and can have relations, effectively serving the role of categories).
Vendure also allows defining Custom Fields on products (and other entities) via configuration, meaning you can extend the data model without hacking the core. For example, if you want to add a “brand” field to products, Vendure lets you do that through config and it will generate the GraphQL schema for it. This is part of Vendure’s extensibility.
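Based on Vendure's documented custom-fields configuration, the "brand" example looks roughly like this; verify the exact shape against the docs for your Vendure version:

```ts
import { VendureConfig } from "@vendure/core";

// Adding a "brand" field to Product via Vendure's custom-fields config.
// Vendure generates the GraphQL schema fields and admin inputs for it.
export const config: Partial<VendureConfig> = {
  customFields: {
    Product: [{ name: "brand", type: "string" }],
  },
};
```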
Vendure supports facets/facet values which can be used as product attributes for filtering (similar to Saleor’s attributes).
In short, Vendure provides a highly customizable catalog structure with a bit of coding, whereas Saleor provides a lot through the admin UI, and Medusa keeps it simpler (with the option to integrate something like a CMS or PIM for additional product enrichment).
Multi-Language (Product Content)
Saleor has built-in multi-language support for product data. Product names, descriptions, etc., can be localized in multiple languages through the admin, and the GraphQL API allows querying in a specified language. This is one of Saleor’s selling points (multi-language, multi-currency).
Vendure supports multi-language by marking certain fields as translatable. Internally, it can store translations for product name, slug, description, etc., in different languages. This is configured at startup (you define which languages you support), and the admin UI allows inputting translations. It’s quite robust in that area for an open-source platform.
MedusaJS does not natively have multi-language fields for products in the core. Typically, merchants using Medusa would handle multi-language by using an external CMS to store translated content (for example, using Contentful or Strapi with Medusa, as suggested by Medusa’s docs).
The Medusa backend itself might not store a French and English version of a product title; you’d either store one in the default language or use metadata fields or region-specific products. However, Medusa’s focus on regions is more about currency and pricing differences, not translations.
Recognizing this gap, the community has created plugins to assist with multilingual catalogs (for instance, there’s a plugin that works with MeiliSearch to index products with internationalized fields). Moreover, Medusa’s Admin recently introduced multi-language support for the admin interface (so the admin UI labels can be in different languages), but that’s separate from actual product content translation.
For a primarily single-language store or one with minimal translation needs, Medusa’s approach is fine, but if you have a complex multi-lingual requirement, Saleor or Vendure may require less custom work.
Multi-Currency and Regional Settings
A highlight of Medusa is its multi-currency and multi-region support. In Medusa, you can define Regions which correspond to markets (e.g., North America, Europe, Asia) and each region has a currency, tax rate, and other settings.
For example, you can have USD pricing for a US region and EUR pricing for an EU region, for the same product. Medusa’s admin and API let you manage different prices for different regions easily. This is extremely useful for DTC brands selling internationally. Medusa also supports setting different fulfillment providers or payment providers per region.
Saleor supports multi-currency through its Channels system. You can set up multiple channels (which could be different countries, or different storefronts) each with their own currency and pricing. Saleor even allows differentiating product availability or pricing by channel.
This covers the multi-currency need effectively (Saleor’s demo often shows, for instance, USD and PLN as two currencies for two channels). Tax calculation in Saleor can integrate with services or be configured per channel as well. So, Saleor is on par with Medusa in multi-currency capabilities, and it additionally handles multi-language as mentioned. It’s truly built for multi-market operation.
Vendure has the concept of Channels too. Channels can represent different storefronts or regions (for example, an EU channel and a US channel). Each channel can have its own currency, default language, and even its own payment/shipping settings.
Vendure allows products to be in multiple channels with different prices if needed. This is basically how Vendure supports multi-currency and multi-store scenarios. It’s quite flexible, although configuring and managing multiple channels requires deliberate setup (like creating a channel, assigning products, etc.).
Vendure’s approach is powerful for multi-tenant or multi-brand setups as well (one Vendure instance could serve multiple shops if configured via channels and perhaps some custom logic).
Search and Navigation
Medusa does not have a full-text search engine built into the core; instead, it provides easy integrations for external search services. You can query products by certain fields via the REST API, but for advanced search (fuzzy search, relevancy ranking, etc.), Medusa leans on plugins.
The Medusa team has provided integration guides or plugins for MeiliSearch and Algolia, two popular search-as-a-service solutions. For example, you can plug in MeiliSearch and have typo-tolerant, fast search on your catalog.
This approach means a bit of setup but results in a better search experience than basic SQL filtering. The trade-off is that search is only as good as the external system you use; if you don’t configure one, you only have simple queries.
Saleor’s approach (at least up to recently) for search was relatively basic; you could perform text queries on product name or description via GraphQL to implement a simple search bar. It did not include a built-in advanced search engine or ready connectors to one at that time.
Essentially, to get a robust search in Saleor, you might need to use a third-party service or write a plugin/app. Given that Saleor is GraphQL, one could use something like ElasticSearch by syncing data to it, but that requires development work (some community projects likely exist). In an enterprise context, it’s expected you’ll integrate a dedicated search system.
Vendure includes a built-in search mechanism which is pluggable. By default, it uses a simple SQL-based search (with full-text indexing on certain fields) to allow basic product searches and filtering by facets. For better performance or features, Vendure provides an ElasticsearchPlugin, a drop-in module that, when enabled, syncs product data to Elasticsearch and uses that for search queries.
There’s also mention of a Typesense-based advanced search plugin in development. This shows Vendure’s emphasis on modularity: you can start with the default search and later move to Elastic or another search engine by adding a plugin, without changing your storefront GraphQL queries. Vendure’s search supports faceted filtering (e.g., by attributes, price ranges, etc.), especially when using Elasticsearch. This is great for storefronts with category pages that need filtering by various criteria.
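Swapping in Elasticsearch is a configuration change rather than a rewrite. A sketch based on the plugin's documented init options; confirm the option shape for your plugin version:

```ts
import { VendureConfig } from "@vendure/core";
import { ElasticsearchPlugin } from "@vendure/elasticsearch-plugin";

// Replacing the default SQL-based search with Elasticsearch by registering
// the plugin; storefront GraphQL queries stay unchanged.
export const config: Partial<VendureConfig> = {
  plugins: [
    ElasticsearchPlugin.init({
      host: "http://localhost",
      port: 9200,
    }),
  ],
};
```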
Checkout, Orders, and Payments
All three platforms handle the full checkout flow including cart, payment processing (via integrations), and order management, but with some nuances:
Checkout Process & Shopping Cart
Each platform provides APIs to manage a shopping cart (often called an “order draft” or similar) and then convert it to a completed order at checkout.
MedusaJS has built-in support for typical cart operations (add/remove items, apply discounts, etc.) and a checkout flow that can be customized. Medusa’s APIs handle everything from capturing customer info to selecting shipping and payment method, placing the order, and then updating order status as fulfillment happens.
Saleor similarly has a checkout object in its GraphQL API, where you add items, set shipping, payment, etc., and then complete an order. Saleor’s logic is quite robust, covering digital goods, multiple shipments, etc., because of its focus on enterprise scenarios.
Vendure’s API includes a “shop” GraphQL endpoint where unauthenticated or authenticated users can manage an active order (cart) and proceed to checkout. Vendure even has features like order promotions and custom order states (through its workflow API) if needed.
Payment Gateway Integrations
Medusa ships with several payment providers integrated: Stripe, PayPal, Klarna, Adyen are supported. Medusa abstracts payment logic through a provider interface, so adding a new gateway (say Authorize.net or Razorpay) is a matter of either installing a community plugin or writing a small plugin yourself to implement that interface.
Thanks to this abstraction, developers have successfully extended Medusa with many region-specific providers too. Medusa does not charge any transaction fees on top; you use your gateway directly (and with the new Medusa Cloud, the team behind Medusa emphasizes that it doesn’t take a cut either).
Saleor supports Stripe, Authorize.net, Adyen out of the box, and through its plugin system, it also has integration for others like Braintree or Razorpay. Being Python, if an API exists for a gateway, you can integrate it via a Saleor plugin in Python.
Saleor’s approach to payments is also abstracted (it had a payment plugins interface). So both Medusa and Saleor cover the common global gateways, with Saleor perhaps having a slight edge in some additional regional ones via community (e.g., Razorpay as mentioned).
Vendure has a robust plugin library that includes payments such as Stripe (there’s an official Stripe plugin), Braintree, PayPal, Authorize.net, Mollie, etc. Vendure’s documentation guides on implementing custom payment processes as well. So Vendure’s coverage is quite broad given the community contributions.
Order Management & Fulfillment
Medusa shines with some advanced features here. It supports full Return Merchandise Authorization (RMA) workflows. This means customers can request returns/exchanges, and Medusa’s admin allows processing returns, offering exchanges or refunds, tracking inventory back, etc. Medusa also uniquely has the concept of Swaps: allowing exchanges where a returned item can trigger a new order for a replacement.
These are sophisticated capabilities usually found in more expensive platforms, and having them in Medusa is a big plus for fashion and apparel DTC brands that deal with returns often. Medusa’s admin and API let you handle order status transitions (payment authorized, fulfilled, shipped, returned, etc.), and it can integrate with fulfillment providers or you can handle it manually via admin.
Saleor covers standard order management. You can see orders, update statuses, process payments (capture or refund), etc. However, a noted difference is that Saleor’s approach to returns/refunds was a bit more manual or basic at least in earlier versions.
There isn’t a built-in automated RMA flow; a store operator might have to mark an order as returned and manually create a refund in the payment gateway or such. They may improve this over time or provide some apps, but it isn’t as streamlined as Medusa’s RMA feature.
For many businesses, this might be acceptable if returns volume is low or they handle it via customer service processes. But it’s a point where Medusa clearly invested effort to differentiate (likely because Shopify’s base offering lacks easy returns handling too, and Medusa wanted to cover that gap).
Vendure’s core includes order states and a workflow that can be customized. It doesn’t natively have a “magic” RMA module built-in to the same degree, but you can implement returns by leveraging its order modifications.
Vendure does allow refunds (it has an API for initiating refunds through the payment plugins if supported), and partial fulfillments of orders, etc. If a robust returns system is needed, it might require some custom development or use of a community plugin in Vendure. Since Vendure is very modular, one could create a returns plugin that automates some of that.
Discounts and Promotions
Medusa supports discount codes and gift cards from within its own functionality. You can create percentage or fixed-amount discounts, limit them to certain products or customer groups, set expiration, etc. Medusa allows product-level discounts (specific products on sale) easily. It also has a gift card system which many platforms don’t include by default.
Saleor also supports discounts (vouchers) and gift cards. Saleor’s discount system can apply at different levels; one interesting note is that Saleor can do category-level discounts (apply to all products in a category), which might be a built-in concept. Saleor, being oriented to marketing needs, has quite an extensive promotions logic including “sales” and “vouchers” with conditions and requirements.
Vendure includes a Promotions system where you can configure promotions with conditions (e.g., order total above X, or buying a certain product) and actions (e.g., discount percentage or free shipping). It’s quite flexible and is done through config or the admin UI. Vendure doesn’t call them vouchers but you can set up coupon codes associated with promotions. Gift cards might not be in the core, but could be implemented or might exist as a plugin.
Extensibility and Customization
One of the biggest reasons to choose a headless open-source solution over a SaaS platform is the ability to customize and extend it to fit your business, rather than fitting your business into it. Let’s compare how our three contenders enable extension:
MedusaJS is designed with a plugin architecture from the ground up. Medusa encourages developers to add features via plugins rather than forking the code. A plugin in Medusa is essentially an NPM package that can hook into Medusa’s backend; it can add API endpoints, extend models, override services, etc.
For instance, if you wanted to integrate a third-party ERP, you could write a plugin that listens to order creation events and sends data to the ERP. Medusa also prides itself on allowing replacement of almost any component; you could even swap out how certain calculations work by providing a custom implementation via dependency injection (advanced use-case).
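That ERP example might look roughly like the following under Medusa's v1-style subscriber pattern; sendToErp is a hypothetical client for your integration, and you should check the current Medusa docs for the exact subscriber API in your version:

```ts
import { EventBusService, OrderService } from "@medusajs/medusa";

// Hypothetical ERP client; stands in for your actual integration.
declare function sendToErp(order: unknown): Promise<void>;

// A subscriber registers event handlers via the injected event bus.
class OrderErpSubscriber {
  private orderService: OrderService;

  constructor({
    eventBusService,
    orderService,
  }: {
    eventBusService: EventBusService;
    orderService: OrderService;
  }) {
    this.orderService = orderService;
    eventBusService.subscribe("order.placed", this.handleOrder);
  }

  handleOrder = async ({ id }: { id: string }): Promise<void> => {
    const order = await this.orderService.retrieve(id, { relations: ["items"] });
    await sendToErp(order);
  };
}

export default OrderErpSubscriber;
```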
Saleor’s extensibility comes in two flavors as noted: Plugins (in-process, written in Python) and Apps (out-of-process, language-agnostic). Saleor’s plugins are used for things like payment gateways, shipping calculations, etc., and run as part of the Saleor server. If you have a specific business logic (say, a custom promotion rule), you might implement it as a plugin so that it can interact with the core logic and database.
On the other hand, Saleor introduced a concept of Saleor Apps which are somewhat analogous to Shopify apps; they are separate services that communicate via the GraphQL API and webhooks. An app can be hosted anywhere, subscribe to events (like “order created”) via webhook, and then call back to the API to do something (like add a loyalty reward, etc.).
This decouples the extension and also means you could use any programming language for the app. The admin panel allows store staff to install and manage these apps (grant permissions, etc.). The advantage of the app approach is safer upgrades (your app doesn’t hack the core) and more flexibility in tech stack; the downside is a slight overhead of maintaining a separate service and the limitations of only using the public API.
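A Saleor-style app, then, is just a small service with a webhook endpoint and an API token. A simplified sketch in Express; the URL, mutation, and payload handling are illustrative, and real webhook payloads should be signature-verified:

```ts
import express from "express";

// A stand-alone "app" service: receives an order-created webhook from Saleor
// and calls back into the GraphQL API with its app token.
const app = express();
app.use(express.json());

app.post("/webhooks/order-created", async (req, res) => {
  const order = req.body; // simplified: verify the webhook signature in production
  await fetch("https://my-shop.saleor.cloud/graphql/", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.SALEOR_APP_TOKEN}`,
    },
    body: JSON.stringify({
      query: `mutation ($id: ID!, $input: [MetadataInput!]!) {
        updateMetadata(id: $id, input: $input) { errors { message } }
      }`,
      variables: { id: order.id, input: [{ key: "loyaltyPoints", value: "42" }] },
    }),
  });
  res.sendStatus(200);
});

app.listen(3000);
```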
Vendure takes an extreme plugin-oriented approach. Almost all features in Vendure (payments, search, reviews, etc.) are implemented as plugins internally, and you can include or exclude them in your server setup. Writing a Vendure plugin means writing a TypeScript class that can tap into the lifecycle of the app, add new GraphQL schema fields, override resolvers or services, etc.
The core of Vendure provides the commerce primitives, and you compose the rest. This is why some view Vendure as ideal if you have very custom requirements. The community has contributed plugins for many needs (reviews system, wishlist, loyalty points, etc.). Vendure’s official plugin list includes not only integrations (like payments, search) but also features (like a plugin that adds support for multi-vendor marketplace functionality, which is something a company might need to add to create a marketplace).
Enterprise Support and Hosting
As of 2025, Medusa has introduced Medusa Cloud, a managed hosting platform for Medusa projects. This caters to teams that want the benefits of Medusa without dealing with server ops. The Medusa Cloud focuses on easy deployments (with Git integration and preview environments) and transparent infrastructure-based pricing (no per-transaction fees).
This shows that Medusa is evolving to serve more established businesses that might require uptime guarantees and easier scaling. Apart from that, Medusa’s core being open-source means you can self-host on AWS, GCP, DigitalOcean, etc., using Docker or Heroku or any Node hosting. Many early-stage companies go that route to save cost.
Saleor Commerce (the company) offers Saleor Cloud, which is a fully managed SaaS version of Saleor. It’s targeted at mid-to-large businesses with a pricing model that starts in the hundreds of dollars per month. This service gives you automatic scaling, backups, etc., and might be attractive if you don’t want to run your own servers.
However, it’s a significant cost that perhaps only later-stage businesses or those with no devops inclination would consider. Saleor’s open-source can also be self-hosted in containers; some agencies specialize in hosting Saleor. Because Saleor is more complex to set up (with services like Redis, etc., possibly needed), the cloud option is a convenient but pricey offering.
Vendure’s company does not currently offer a public cloud SaaS. They focus on the open-source product and consulting. That said, because Vendure is Node, you can host it similarly easily on any Node-friendly platform. Some third-party hosting or PaaS might even have one-click deployments for Vendure.
From a total cost of ownership perspective: all three being open-source means you avoid licensing fees of traditional enterprise software. If self-hosted, your costs are infrastructure (cloud servers, etc.) and developer time.
Saleor might incur higher dev costs if you need both Python and front-end expertise, and possibly higher infrastructure costs if the Python/GraphQL stack needs more instances to scale.
Medusa and Vendure could be more resource-efficient for moderate scale (Node can handle a lot on modest hardware, and you can optimize with cluster mode, etc.).
Performance and Scalability Considerations
For any growing business, the platform needs to handle increased load: more products, more traffic, flash sales, etc. Let’s consider how each platform fares and what it means for your project’s scalability:
MedusaJS (Node/Express, REST):
Medusa’s lightweight nature can be an advantage for performance. With a lean Express.js core and no GraphQL parsing overhead, each request can be handled relatively fast and with low memory usage.
Node.js can handle a high number of concurrent requests efficiently (non-blocking I/O), so Medusa can serve quite a lot of traffic on a single server. If more power is needed, you can run multiple instances behind a load balancer.
Also, because Medusa can be containerized easily (they provide a Docker deployment guide), scaling horizontally in the cloud is straightforward. For database scaling, you rely on whatever your SQL DB (Postgres, etc.) can do; typically vertical scaling or read replicas if needed.
Medusa being stateless fits cloud scaling well. For small-to-medium businesses, Medusa’s performance is more than enough, and even larger businesses can scale it out.
Saleor (Python/Django, GraphQL):
Saleor is built on Django, which is a robust framework used in many high-scale sites. Performance-wise, GraphQL adds some overhead per request (parsing queries, resolving fields). However, GraphQL also can reduce the number of requests the client needs to make (one query vs multiple REST calls).
Saleor’s architecture can be scaled vertically (powerful servers) or horizontally by running multiple app instances behind a gateway. Because it uses Django, it typically will use more memory per process than a Node process, and handling extremely high concurrency might require more instances.
That said, Saleor has been shown to handle enterprise loads when properly configured (using caching for queries, etc.). Saleor’s advantage is that if you use their cloud or a similar setup, they already incorporate scalability best practices (like auto-scaling on high traffic).
For a new store, Saleor will likely run just fine on modest infrastructure (it’s easy to start with, say, a $20/mo Heroku dyno or similar), but as you grow, resource usage might climb faster than it would with a Node solution.
Vendure (Node/NestJS, GraphQL):
Vendure, using NestJS and GraphQL, has a performance profile somewhere between Medusa and Saleor. Node.js is generally very performant with I/O, and NestJS adds a bit of overhead due to its structure but also helps by providing tools like a built-in GraphQL engine (Apollo Server).
Like Medusa, Vendure benefits from Node’s ability to handle many concurrent connections. The use of GraphQL means each request might do more work on the server to assemble the response, though common queries are likely well optimized by Vendure’s team.
Vendure also has the concept of a Worker process for heavy tasks, which means if you have computationally intensive jobs (e.g., rebuilding a search index, sending bulk emails), those can be offloaded, keeping the main API responsive.
Vendure being TypeScript means you can catch type errors at compile time and, to an extent, head off certain classes of bugs in data-heavy operations before they reach production.
Handling Growth:
If you anticipate massive scale (millions of users, hundreds of thousands of orders, etc.), Saleor’s approach might be appealing due to its enterprise orientation and cloud offering. However, that doesn’t mean Medusa or Vendure can’t handle it; they absolutely can if engineered well. In fact, the lack of heavy abstractions in Medusa could be a benefit when fine-tuning for performance.
For fast-growing DTC brands (think going from 100 orders/day to 1000+ orders/day after a few influencer hits), Medusa and Vendure provide a lot of agility. Medusa’s focus on being “lightweight, flexible architecture, ideal for speed and adaptability” makes it a strong choice for those who need to iterate quickly. You can optimize or add capabilities as needed without waiting on vendor roadmaps.
Saleor is more like a high-performance sports car; it’s equipped for high speed, but you need a skilled driver (developers who know GraphQL/Python well) to push it to its limits and maintain it.
All three can be customized heavily. If you foresee the need to implement highly unique business logic or integrate unusual systems, consider how you’d do it on each:
With Medusa, you would likely write a plugin in Node or directly modify the server code (since it’s simple JS/Express). Great for quickly adding something like “I want to apply a custom discount rule for VIP customers; just drop in some JS in the right place” (see the sketch after this list).
With Saleor, consider whether it can be done with an App (external service using the API) or needs an internal plugin. If internal, you need Python dev skills and understanding of Saleor’s internals. If external, you need to be comfortable with GraphQL and possibly running an additional service.
With Vendure, you write a plugin in TypeScript. If you like structured code and strongly typed schemas, this is very satisfying; if not, it might feel like extra ceremony.
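As a flavor of that “drop in some JS” style for the VIP example above, here is an illustrative, framework-agnostic helper; how you wire it into Medusa’s cart calculation depends on your Medusa version and its plugin hooks:

// Illustrative VIP discount rule (plain TypeScript; the Medusa wiring is version-specific)
interface Cart {
  subtotal: number; // in the smallest currency unit, e.g. cents
  customerGroups: string[];
}

function applyVipDiscount(cart: Cart): number {
  // Hypothetical rule: VIP customers get 10% off subtotals over $100
  if (cart.customerGroups.includes("vip") && cart.subtotal > 100_00) {
    return Math.round(cart.subtotal * 0.9);
  }
  return cart.subtotal;
}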
A Few Final Words
MedusaJS, Saleor, and Vendure all tick the “headless, open-source, flexible” boxes but each wins in different places.
MedusaJS shines for lean, fast-moving teams that want to hack, extend, and own their stack.
Saleor is best when you need enterprise-grade stability, global readiness, and a GraphQL-first mindset.
Vendure appeals to TypeScript-heavy teams that want strong typing, modular plugins, and deep architectural control.
The right choice depends less on which is “objectively best” and more on which aligns with your team’s skills, your growth plans, and the trade-offs you’re willing to make. In the end, the winner is the one that fits your context.
Migrating from Magento to MedusaJS
If Medusa is the platform that fits your context and you are coming from Magento, the rest of this guide walks through the replatforming. Below are the key planning steps and best practices:
Assess Your Magento Implementation (Data and Customizations)
Start by auditing your current Magento setup in detail. This involves:
Catalog and Data Model Compatibility
Review how your product catalog, categories, variants, pricing, and customers are structured in Magento, and map these to Medusa’s data models. Medusa has its own schemas for products, variants, orders, customers, etc., which are more straightforward than Magento’s (Magento uses an EAV model for products with attribute sets, which doesn’t directly exist in Medusa).
Identify any custom product attributes or complex product types (e.g. Magento bundle or configurable products) that will need special handling. For example, Magento “configurable products” with multiple options will likely map to a product with multiple variants in Medusa.
Make sure Medusa’s model can accommodate all necessary data (it usually can, via built-in fields or using metadata for custom attributes). Early on, define how each entity (products, SKUs, categories, customers, orders, discount codes, etc.) will translate into the Medusa schema.
Extension and Module Inventory
Magento installations often have numerous third-party modules and custom extensions providing extra features (from SEO tools to loyalty programs). List out all installed Magento modules and custom code. You can generate a module list via CLI: for example, running
php bin/magento module:status > modules_list.txt
will output all modules in your Magento instance. Using this list, evaluate each module’s functionality:
Determine which features are native in Medusa (so you won’t need an equivalent extension). Medusa covers many commerce basics like product management, multi-currency, pricing rules, discounts, etc., out-of-the-box.
For features not built into Medusa, check if an existing Medusa plugin or integration can provide that capability. Medusa has an ecosystem of official and community plugins (for payments, CMS, search, analytics, etc.).
For truly custom or business-specific features that neither Medusa core nor a plugin covers, plan how to reimplement them in Medusa. This might involve writing a custom Medusa plugin or using Medusa’s APIs to integrate an external service. The good news is Medusa’s plugin system allows you to extend any part of the backend or admin with custom logic relatively easily. For instance, if you have a complex promotion rule module in Magento, you might recreate it as a Medusa plugin hooking into the order calculation flow. Prioritize which custom functions are critical to carry over and design solutions for them.
Data Volume and Quality
Consider the volume of data to migrate (number of SKUs, customers, orders, etc.) and its cleanliness. It’s also a chance to eliminate outdated or low-value data (for example, old customer records, or products that are no longer sold) so you start “clean” on Medusa.
Note: It’s often helpful to create a mapping document that enumerates Magento entities and how each will be handled in Medusa (e.g., Magento customer entity -> Medusa customer, including addresses; Magento reward points -> integrate XYZ loyalty service via API). This becomes your blueprint.
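The mapping document can be as lightweight as a shared spreadsheet or a checked-in JSON file; the entries below are illustrative:

{
  "customer": { "target": "medusa.customer", "notes": "include addresses; decide on password handling" },
  "configurable_product": { "target": "medusa.product + variants", "notes": "each option combination becomes a variant" },
  "custom_attributes": { "target": "medusa.product.metadata", "notes": "carry over as key/value metadata" },
  "reward_points": { "target": "external loyalty service", "notes": "integrated via API, not stored in Medusa" }
}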
Define a Migration Strategy and Timeline
With requirements understood, the next step is to choose a migration approach. For most enterprises, a phased migration strategy is highly recommended over a “big bang” cutover.
In a phased approach, you gradually transition pieces of functionality from Magento to Medusa in stages, rather than switching everything in one night. This greatly reduces risk and complexity. Key benefits of a phased replatforming include the ability to test and fix issues in isolation, minimal downtime, and continuous business operation during the transition. By migrating one component at a time, you can validate that piece (e.g. product catalog) in Medusa while the rest of the system still runs on Magento. If something goes wrong, it’s easier to roll back a single component than a whole system.
Plan out the phases that make sense for your business. A typical plan (detailed in the next section) might be:
Phase 1: Build a new Medusa-based storefront (while Magento remains the backend)
Phase 2: Migrate product/catalog data to Medusa
Phase 3: Migrate cart & checkout (orders) to Medusa
Each phase should be treated as a mini-project with its own design, implementation, and QA. Determine clear exit criteria for each phase (e.g. “new product catalog on Medusa shows all items correctly and inventory syncs with ERP”) before moving on.
Also decide on timing: choose low-traffic periods for cutovers of critical pieces, and ensure business stakeholders are aligned on any necessary content freeze or downtime. For example, when you migrate the product catalog, you may enforce a freeze on adding new products in Magento to avoid divergence while data is copied. Similarly, a final order migration might require a short checkout downtime to ensure no orders are lost. All such events should be scheduled and communicated.
During planning, also outline a data synchronization strategy. In a phased migration, you’ll have a period where Magento and Medusa run in parallel for different functions. You must plan how data will stay consistent between them:
For example, in Phase 1, Magento is still the source of truth for products and orders, but a new Medusa/Next.js frontend might be reading some data. You can use Magento’s REST APIs or GraphQL to fetch live data from Magento into the new frontend. If you are also sending some data to Medusa (in later phases), you might temporarily feed updates both ways (Magento to Medusa and vice versa) to keep systems in sync.
You might implement synchronization scripts or use a Medusa migration plugin that periodically pulls data from Magento and pushes it to Medusa. During Phase 2, for instance, you could run a one-time import of all products, then set up a job to sync any new or updated products from Magento to Medusa until Magento is fully retired.
Plan for the final cutover data sync: when you switch completely to Medusa, you’ll need to migrate any delta data that changed on Magento since the last bulk migration. For instance, just before Phase 3 (moving checkout), you might import all orders placed on Magento up to that minute into Medusa, so that order history is preserved. Similarly, migrate any new customers or reviews that were added in Magento during the transition.
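A delta sync can lean on Magento’s searchCriteria filters. The sketch below pulls products updated since the last sync; pushing each record into Medusa then uses whichever import path you choose in Phase 2 (the base URL and token handling are simplified):

// Sketch: fetch Magento products changed since the last sync
async function fetchMagentoDelta(baseUrl: string, token: string, since: string) {
  const params = new URLSearchParams({
    "searchCriteria[filter_groups][0][filters][0][field]": "updated_at",
    "searchCriteria[filter_groups][0][filters][0][value]": since, // e.g. "2025-09-01 00:00:00"
    "searchCriteria[filter_groups][0][filters][0][condition_type]": "gt",
    "searchCriteria[pageSize]": "100",
  });
  const res = await fetch(`${baseUrl}/rest/default/V1/products?${params}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  const data = await res.json();
  return data.items; // hand these to the Medusa import step
}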
It’s best to set up Medusa development and staging environments early in the project. Stand up a Medusa instance (or a few) in a sandbox environment and start populating it with sample data. This will be used to develop and test migration scripts. Make sure you have a staging database for Medusa (e.g., PostgreSQL or MySQL, whichever you choose for Medusa) and that the team is familiar with deploying Medusa. Medusa provides a CLI to bootstrap a new project quickly, for example:
npx create-medusa-app@latest
This will create a new Medusa server project (and optionally a Next.js storefront if you choose) on your machine. You can also initialize a Medusa project via the Medusa CLI (the medusa new command) to include a seeded store for testing.
As part of setup, you’ll create an Admin user for the Medusa backend and explore the Medusa Admin dashboard to ensure you know how to manage products, orders, etc., in the new system. Familiarize your ops/administrative staff with the Medusa admin UI early, so they can provide feedback on any critical gaps (for instance, Magento has some specific admin grids or reports you might need to replicate).
Finally, communicate and coordinate the migration plan with all stakeholders. The engineering team, product managers, operations, customer support, and leadership should all understand the phased plan, the timeline, and any expected impacts (like minor UI changes in Phase 1 or slight differences in workflows in the new system). Migration at this scale is as much about change management as it is about technology. With a solid plan in place, you can now proceed to execution.
With planning done, it’s time to implement the migration. We will outline a phased step-by-step execution that gradually moves your e-commerce backend, admin, and storefront from Magento to MedusaJS.
Each phase below corresponds to a portion of functionality being migrated, aligned with best practices to minimize risk. Throughout each phase, maintain rigorous testing and quality assurance before proceeding to the next stage.
Phase 1: Launch a New Headless Storefront (Decoupling the Frontend)
The first phase is all about decoupling your storefront (UI) from Magento’s integrated frontend. In Magento, the frontend (themes/templates) is tightly coupled with the backend. We’ll replace this with a new headless storefront (for example, a Next.js or Gatsby application) that initially still uses Magento’s backend via APIs.
In Phase 1, you introduce a new headless storefront and CMS while Magento remains the backend; the new frontend (e.g., a Next.js app) fetches data from Magento’s APIs. Steps in Phase 1:
Develop the new Frontend
Choose a modern frontend framework such as Next.js, Gatsby, or Nuxt to build your storefront. Medusa provides starter templates for Next.js that you can use as a foundation (or you can build from scratch). Design the frontend to consume data from an API rather than directly from a database.
In this phase, the API will be Magento’s. Magento 2 supports a REST API and a GraphQL API out-of-the-box. For example, your new product listing page in Next.js could call Magento’s REST endpoints (or GraphQL queries) to fetch products and categories.
This essentially treats Magento as a headless service. You might build a small middleware layer or utilize Next.js API routes to securely proxy calls to Magento’s API if needed, or call Magento APIs directly from the frontend (taking care of CORS and authentication).
Many enterprise teams opt to implement a BFF (Backend-For-Frontend)—a lightweight Node.js server that sits between the frontend and Magento—to aggregate and format data. This is optional but can help in mapping Magento’s API responses to a simpler format for the UI.
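As a sketch of that optional BFF, a Next.js API route can proxy Magento and trim its verbose responses down to what the UI needs (names and fields below are illustrative):

// pages/api/products.ts — illustrative Next.js API route acting as a thin BFF over Magento
import type { NextApiRequest, NextApiResponse } from "next";

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const magentoRes = await fetch(
    `${process.env.MAGENTO_URL}/rest/default/V1/products?searchCriteria[pageSize]=20`,
    { headers: { Authorization: `Bearer ${process.env.MAGENTO_TOKEN}` } }
  );
  const data = await magentoRes.json();
  // Map Magento's product shape to the minimal fields the UI renders
  res.json(data.items.map((p: any) => ({ sku: p.sku, name: p.name, price: p.price })));
}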
Replicate design and UX
Reimplement your storefront’s design on the new tech stack. Try to keep the user experience consistent with the old site initially, to avoid confusing customers during the transition.
You can, of course, take the opportunity to improve UX, but major changes might be better introduced gradually. Importantly, ensure global elements like header, footer, navigation, and product URL structure remain familiar or have proper redirects, so SEO and usability aren’t hurt.
Connect to Magento’s Data
Use Magento’s API to feed the necessary data. For instance, the product listing page will call an endpoint like /rest/V1/products (Magento’s REST) or a GraphQL query to retrieve products and categories. You will likely need an API authentication token to access Magento’s APIs.
Magento’s REST API can be accessed by generating an integration token or, as the Medusa migration plugin does, by programmatically obtaining an admin token. For example, the Medusa migration module POSTs to Magento’s V1/integration/admin/token endpoint with admin credentials to get a token:
const response = await fetch(`${magentoBaseUrl}/rest/default/V1/integration/admin/token`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ username: MAGENTO_ADMIN_USER, password: MAGENTO_ADMIN_PASS }),
});
// Magento returns the token as a JSON-encoded string, so parse it rather than reading raw text
const token = await response.json();
// Use this token in the Authorization header ("Bearer <token>") for subsequent Magento API calls
Proxy live operations to Magento
In this phase, Magento still handles all commerce operations (cart, checkout, customer accounts). Your new frontend will simply redirect or proxy those actions. For example, when a user clicks “Add to Cart” or goes to checkout, you might hand off to Magento’s existing pages or send a request to Magento’s cart API.
It’s acceptable if the checkout flow temporarily takes users to Magento’s domain or uses Magento’s UI, as this will be addressed in later phases. The goal of Phase 1 is not to eliminate Magento, but to introduce the new frontend and CMS while Magento underpins it behind the scenes.
(For teams that cannot rebuild the entire frontend in one go, an alternative approach is to do a partial storefront migration. Using tools like Next.js Rewrite rules, you can incrementally replace certain Magento pages with new ones. For example, you could serve product detail pages via the new Next.js app, but keep checkout on Magento until later. This way, you flip portions of the UI to the new stack gradually. While this complicates routing, it offers a very controlled rollout. Many teams, however, would prefer to launch the whole new frontend at once as described above, for a cleaner architecture.)
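If you go the partial route, Next.js fallback rewrites are one way to do it: any path the new app doesn’t serve falls through to Magento (the domain below is a placeholder):

// next.config.js — routes the new app doesn't handle fall back to Magento
module.exports = {
  async rewrites() {
    return {
      beforeFiles: [],
      afterFiles: [],
      fallback: [
        { source: "/:path*", destination: "https://magento.example.com/:path*" },
      ],
    };
  },
};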
Phase 2: Migrate Product Catalog, Inventory, and Pricing to Medusa
In Phase 2, the focus shifts to the backend. Here we migrate the core catalog data (products, categories, inventory, prices) from Magento’s database into Medusa. By the end of this phase, Medusa will become the source of truth for all product information, while Magento may still handle the shopping cart and orders until Phase 3.
Steps in Phase 2:
Set up Medusa Server and Modules
If you haven’t already, install and configure your Medusa backend service. This involves spinning up a Medusa server (Node.js application) connected to a database (Medusa supports PostgreSQL, MySQL, etc., with an ORM).
Medusa comes with a default product module, order module, etc. Make sure your Medusa instance is running and you can access the Medusa Admin panel. In the Admin, you might manually create a couple of sample products to see how data is structured, then delete them, just to get familiar.
Also configure any essential settings in Medusa (currencies, regions, etc.) to align with your business; for example, if your Magento store had multiple currencies or websites, configure Medusa’s regions and currency settings accordingly.
Data Export from Magento
Extract the product catalog data from Magento. There are a few approaches for this:
Use Magento’s REST API to fetch all products, categories, and related data (images, attributes, inventory levels, etc.). Magento’s API allows filtering and pagination to get data in batches.
Alternatively, do a direct database export from Magento’s MySQL. For example, run SQL queries or use Magento’s built-in export tool to get products to CSV. However, Magento’s data schema is quite complex (spread across many tables due to EAV), so using the API (which presents consolidated data) can simplify the process.
In either case, you will likely need to transform the data format to match Medusa. For instance, Magento’s product may have a separate price table, whereas in Medusa the price might be a field or managed via price lists. Plan to capture product names, SKUs, descriptions, category relationships, images, variants/options (size, color, etc.), stock levels, and any custom attributes you identified during planning.
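If you export via the REST API, paginate with searchCriteria until a short page comes back, reusing the admin token obtained in Phase 1; a minimal sketch:

// Sketch: page through Magento's full product catalog via REST
async function exportAllProducts(baseUrl: string, token: string) {
  const pageSize = 100;
  const all: any[] = [];
  for (let page = 1; ; page++) {
    const res = await fetch(
      `${baseUrl}/rest/default/V1/products?searchCriteria[pageSize]=${pageSize}&searchCriteria[currentPage]=${page}`,
      { headers: { Authorization: `Bearer ${token}` } }
    );
    const { items } = await res.json();
    all.push(...items);
    if (items.length < pageSize) break; // last page reached
  }
  return all;
}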
Data Import into Medusa
Insert the exported data into Medusa’s database using the Medusa APIs or programmatically. You have a few options:
Use Medusa’s Admin REST API: Medusa exposes endpoints to create products, variants, etc. You could write scripts that read the Magento data and call Medusa’s /admin/products endpoints to create products one by one. This is straightforward but could be slow for very large catalogs unless you batch requests.
Use a Medusa Script or Plugin: Because Medusa is a Node.js system, you can write a custom script (within the Medusa project or a separate Node script) that uses Medusa’s internal services or repository layer to insert data directly. For example, within a Medusa plugin context, you could use Medusa’s ProductService to create products in bulk. You essentially create a reusable migration tool: the plugin fetches products from Magento and then calls Medusa’s createProductsWorkflow to import them. The benefit of doing this as a plugin is that you can rerun it or even schedule it (e.g., to periodically sync data during transition).
CSV/JSON import via code: Another approach is to export data to a structured file (CSV/JSON) from Magento and then write a Node.js script using Medusa’s SDK or direct DB calls to import. This is custom but might be simpler for one-time use.
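Whichever option you pick, the per-product write looks roughly like the sketch below; the payload is trimmed to essentials, and the exact fields and auth mechanism depend on your Medusa version:

// Sketch: create one product via Medusa's Admin API (fields trimmed; auth varies by version)
async function createMedusaProduct(medusaUrl: string, adminToken: string, magentoProduct: any) {
  const res = await fetch(`${medusaUrl}/admin/products`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${adminToken}`,
    },
    body: JSON.stringify({
      title: magentoProduct.name,
      // carry custom Magento attributes over as metadata, per your mapping document
      metadata: { magento_sku: magentoProduct.sku },
    }),
  });
  if (!res.ok) throw new Error(`Import failed for ${magentoProduct.sku}: ${res.status}`);
  return res.json();
}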
Verify and Adjust Data in Medusa
Once imported, use the Medusa Admin dashboard to spot-check the catalog. Do all products appear with correct titles, prices, variants, and images? Are categories properly assigned? This is where you may need to adjust some mappings.
For example, Magento product “attributes” that were used for filtering (color, brand, etc.) might be represented in Medusa as product tags or metadata. If so, you might convert Magento attributes to Medusa tags for filtering purposes. Likewise, customer groups or tier pricing in Magento could map to Medusa’s customer groups and price lists (Medusa has a Price List feature for special pricing).
Point Frontend Product Calls to Medusa
After product data is in Medusa, switch your new frontend to use Medusa’s Store APIs for product and inventory data. Up to now, in Phase 1, the Next.js app was likely calling Magento’s API to list products. Now you will update those API calls to query the Medusa backend instead. Medusa provides a Store API (unauthenticated endpoints) for products, collections, etc.
For example, your product listing page might hit GET /store/products on Medusa (which returns a list of products in JSON). This cutover should be invisible to users: the data is the same conceptually, just coming from a different backend. Because we still haven’t moved the cart/checkout, you may need to ensure product IDs or SKUs align so that when a user adds to cart (going to Magento), it still recognizes the product. If you maintain the exact same SKUs and identifiers in Medusa as in Magento, you can cross-reference easily. You might keep a mapping of Magento product ID to Medusa product ID if needed just for the interim.
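The cutover itself can be as small as swapping the base URL and response mapping in your data-fetching layer; an illustrative before/after (note that newer Medusa versions may require a publishable API key header on store endpoints):

// Before (Phase 1): the listing page reads from Magento
// const res = await fetch(`${MAGENTO_URL}/rest/default/V1/products?searchCriteria[pageSize]=20`, { headers: ... });

// After (Phase 2): the same page reads from Medusa's Store API
const res = await fetch(`${MEDUSA_URL}/store/products?limit=20`);
const { products } = await res.json();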
Phase 3: Migrate Cart, Checkout, and Order Processing to Medusa
Phase 3 tackles the transactional components: shopping cart, checkout, payments, and order management. This is usually the most complex part of the migration because it affects the core of your e-commerce operations and customer experience.
Steps in Phase 3:
Rebuild Cart & Checkout Functionality
Since your frontend is already decoupled, you will now integrate it with Medusa’s cart and order APIs instead of Magento’s. Medusa provides a Cart API and Order API to support typical e-commerce flows. For example:
When a user clicks “Add to Cart”, you will call POST /store/carts (to create a cart) and then POST /store/carts/{cart_id}/line-items to add a product line item. Medusa’s store API will handle the cart persistence (likely in Medusa’s DB).
The cart state (items, totals, etc.) can be retrieved via GET /store/carts/{cart_id} and displayed to the user.
For checkout, Medusa supports typical checkout flows: you’ll collect shipping address, select shipping method, select payment method, etc., and update the cart via API calls (e.g., POST /store/carts/{id}/shipping-method, POST /store/carts/{id}/payment-session).
Finally, placing an order is usually done by completing the payment session which in turn creates an Order in Medusa (this is often POST /store/carts/{id}/complete).
Essentially, you need to replicate in the new frontend all the steps that Magento’s checkout used to provide. Whether you used Magento’s one-page checkout or multiple steps, design a corresponding flow in the new frontend calling Medusa. The heavy lifting of order creation, tax calculation, etc., can be done by Medusa if configured properly.
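Sketched end to end, the happy path against those store endpoints looks like this (v1-style paths as named above; address, shipping, and payment steps elided, error handling omitted):

// Sketch: minimal cart-to-order flow against Medusa's Store API
async function placeTestOrder(base: string) {
  const json = { "Content-Type": "application/json" };

  // 1. Create a cart
  let res = await fetch(`${base}/store/carts`, { method: "POST", headers: json });
  const { cart } = await res.json();

  // 2. Add a line item (variant_id comes from the product the shopper chose)
  res = await fetch(`${base}/store/carts/${cart.id}/line-items`, {
    method: "POST",
    headers: json,
    body: JSON.stringify({ variant_id: "variant_123", quantity: 1 }),
  });

  // 3. ...collect address, shipping method, and payment session via the endpoints above...

  // 4. Complete the cart, which creates the order
  res = await fetch(`${base}/store/carts/${cart.id}/complete`, { method: "POST" });
  return res.json();
}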
Configure Payment Providers in Medusa
Set up payment processing within Medusa to replace Magento’s payment integrations. Medusa has a plugin-based system for payments, with support for providers like Stripe, PayPal, Adyen, etc. If you were using, say, Authorize.net or Stripe in Magento, you can install the Medusa plugin for the same (or use a new provider if desired).
Make sure your Medusa instance is configured with API keys for the payment gateway and that the frontend integrates the payment provider’s UI or SDK appropriately (for example, Stripe Elements on the frontend and the @medusajs/stripe plugin on the backend).
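In a v1-style Medusa project, that wiring is a plugin entry plus environment variables (the plugin name and options below follow the commonly documented setup; verify against your Medusa version):

// medusa-config.js (v1-style) — registering the Stripe payment plugin
const plugins = [
  // ...other plugins...
  {
    resolve: `medusa-payment-stripe`,
    options: {
      api_key: process.env.STRIPE_API_KEY,
      webhook_secret: process.env.STRIPE_WEBHOOK_SECRET,
    },
  },
];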
Set up Shipping and Tax in Medusa
Medusa provides default shipping option management and integrates with shipping carriers via plugins (if needed). For taxes, Medusa can handle simple tax rules or integrate with tax services. Configure any necessary fulfillment providers (e.g., Shippo or ShipStation) and tax rates or services (like TaxJar) so that the Medusa order flow computes totals correctly.
Migrate or Sync Customer Accounts (if required for checkout)
Depending on how you want to handle customer logins, you might at this point migrate customer accounts to Medusa. Medusa has its own customer management and authentication.
However, if you want logged-in customers to be able to see their profile or use saved addresses during checkout on the new system, you’ll need to move customer data now. Migrating customer accounts means importing users’ basic info and hashed passwords.
Medusa uses bcrypt for hashing passwords by default; Magento (depending on version) might use different hash algorithms (MD5 with salt in M1, SHA-256 in M2). One strategy is to migrate all users with a flag requiring a password reset (simplest, but impacts user experience), or attempt to import password hashes and adjust Medusa’s authentication to accept Magento’s hash format (advanced).
Order Management and Fulfillment
Recreate any order processing workflows in Medusa. For example, if Magento was integrated to an ERP or OMS (Order Management System) for fulfillment, now Medusa must integrate with those systems. Medusa can trigger webhooks or you can use its event system to notify external systems of new orders.
If your team uses an admin interface to manage orders (e.g., print packing slips, update order status), the Medusa Admin or a connected OMS should be used. The Medusa Admin dashboard allows viewing and updating orders, creating fulfillments, etc., similar to Magento admin.
You might need to train the operations team to use Medusa’s admin for processing orders (creating shipments, marking orders as shipped, etc.). If any custom post-order logic existed (like custom fraud checks, or split fulfillment logic), implement that either via Medusa’s plugins or an external microservice triggered by Medusa’s order events.
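One way to wire that up, assuming a v1-style subscriber (the OMS URL and payload are placeholders):

// src/subscribers/order-placed.js — v1-style subscriber forwarding new orders to an external OMS
class OrderPlacedSubscriber {
  constructor({ eventBusService, orderService }) {
    this.orderService_ = orderService;
    eventBusService.subscribe("order.placed", this.handleOrderPlaced);
  }

  handleOrderPlaced = async ({ id }) => {
    const order = await this.orderService_.retrieve(id);
    // Placeholder intake endpoint; replace with your fulfillment system's API
    await fetch("https://oms.example.com/intake/orders", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ orderId: order.id, email: order.email }),
    });
  };
}

export default OrderPlacedSubscriber;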
Cutover Cart/Checkout on Frontend
Once Medusa’s checkout is fully implemented and tested in a staging environment, you will switch the production frontend to use it. This is a big milestone: effectively, Magento is removed from the live customer path.
Coordinate the deployment for a quiet period. It can be wise to disable order placement on Magento shortly before the switch (for instance, put Magento’s checkout in maintenance mode) to avoid any orders being placed on Magento at the same time.
When you deploy the new frontend that connects to Medusa for cart/checkout, run through a suite of test orders (ideally in a staging environment or with test payment modes on production just before enabling real payments).
Data Migration (Orders)
You may want to migrate historical order data from Magento into Medusa, for continuity in order history. This can be done via script or gradually. However, migrating thousands of old orders might not be strictly necessary for operations; some teams keep Magento read-only for a time for reference or build an archive.
If you do import past orders, you might insert them as Medusa orders via the Admin API or directly in DB. The critical part in Phase 3 is to ensure any ongoing orders (like open carts or pending orders) are either transferred or completed. For example, you might cut off new cart creation on Magento a few hours before, but allow any user who was in checkout to finish (or provide a clear notice to refresh and start a new cart on the new system).
Now that your store is on MedusaJS, leverage the benefits of the new architecture and follow best practices to get the most out of it. Here are some recommended architecture patterns and practices post-migration for an enterprise-scale Medusa setup:
Composable, Microservices-Friendly Architecture
With Medusa as the core commerce service, your overall e-commerce platform is now “composable.” This means you can plug in and replace components at will. Continue to embrace this modular approach.
For example, if you want to add a new capability like AI-driven recommendations, you can integrate a specialized microservice for that via Medusa’s APIs or events, without monolithic constraints. Each piece of the system (CMS, search, payments, etc.) can scale independently and be updated on its own schedule.
Scalability and Cloud Deployment
Deploy Medusa in a cloud-native way to achieve maximal scalability and reliability. Containerize the Medusa server (Docker) and use Kubernetes or similar to manage scaling. Because Medusa is stateless (except the database), you can run multiple instances for load balancing.
Scale the database vertically or use read replicas as needed; e.g., a managed PostgreSQL service that can handle enterprise load. Use auto-scaling for your frontend as well (if using Next.js, consider serverless deployment for dynamic functions and a CDN for static pre-rendered pages).
Monitor resource usage and performance; one benefit of Medusa’s headless setup is you can put a caching layer or CDN in front of certain API calls if needed (though be careful to cache only safe GET requests like product browsing, not cart actions).
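A starting-point Dockerfile can be as small as the sketch below (the Node version and npm scripts are assumptions about your project setup):

# Dockerfile — minimal sketch for containerizing a Medusa server
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
EXPOSE 9000
CMD ["npm", "run", "start"]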
Maintain a Clean Extension Layer
As you add features to your commerce platform, use Medusa’s extension points (plugins, modules, and integrations) rather than modifying core code. This keeps the core stable and upgradable. Medusa’s plugin system supports adding custom routes, middleware, or overriding core service logic in a contained manner.
If an enterprise feature is missing, consider building a plugin and possibly contributing it back to the Medusa community. This way, your platform remains maintainable. For example, if down the road you need a complex promotion engine beyond what Medusa offers, build it as a separate service or plugin that interfaces with orders, rather than forking the Medusa core.
Cost and Maintenance Considerations
Without Magento’s license or heavy hosting requirements, you may find cost savings. However, budget for the new components (hosting for Medusa, any new SaaS services like CMS or search). Keep track of total cost of ownership.
Over time, Medusa’s lower resource footprint can be a win; for example, a Node service might use less memory and CPU under load than Magento did. If you switched from Magento Commerce (paid) to Medusa (free OSS), you’ve eliminated license fees as well.
By approaching the process in phases—starting with the storefront, then moving catalog data, and finally checkout and orders—you minimize risk while steadily unlocking the benefits of a headless, modular architecture.
The result is a faster, more scalable platform that adapts to your business needs instead of limiting them. With MedusaJS in place, your enterprise is better equipped for future growth, innovation, and long-term efficiency.