Building a Modern eCommerce Platform with Optimizely PaaS and Next.js SaaS Frontend - Part 3: Testing, Deployment & Monitoring
Introduction & Testing Strategy Overview
In Part 1 and Part 2, we covered the architecture and frontend implementation of our headless eCommerce platform. Now it's time to tackle the operational side—the testing, deployment, and monitoring strategies that make this architecture production-ready.
Testing a headless architecture presents unique challenges. We need to test not just our frontend components, but also the integration between our Next.js app and Optimizely Content Graph, our custom REST APIs, and the entire user journey from content creation to purchase completion.
Our testing strategy follows a testing pyramid approach:
- Unit Tests – Individual components and utilities
- Integration Tests – API integrations and data flow
- E2E Tests – Complete user journeys
- Visual Regression Tests – UI consistency across changes
We use Storybook for component development and visual testing, Jest and React Testing Library for unit tests, Playwright for end-to-end testing, and GitHub Actions for continuous integration.
Our deployment strategy leverages Azure App Service for the frontend and Optimizely DXP for the backend, with automated deployments, blue-green rollouts, and comprehensive monitoring using Application Insights.
Architecture Diagram: Testing, Deployment & Monitoring Flow
Below is a high-level diagram showing how all the operational pieces fit together:
This diagram illustrates the end-to-end lifecycle:
- Code changes trigger CI/CD workflows
- Automated tests validate functionality and UI
- Successful builds deploy via blue-green slots
- Production telemetry flows into Application Insights and Core Web Vitals dashboards
Component Testing with Storybook
Storybook Setup & Configuration
Storybook serves as our component development environment, documentation system, and visual testing platform. It allows us to build components in isolation, test different states and props, and catch visual regressions before they reach production.
// .storybook/main.ts
import type { StorybookConfig } from '@storybook/nextjs';

const config: StorybookConfig = {
  stories: ['../src/**/*.stories.@(js|jsx|ts|tsx)'],
  addons: [
    '@storybook/addon-essentials',
    '@storybook/addon-interactions',
    '@storybook/addon-a11y',
    '@storybook/addon-viewport',
    'storybook-addon-design-tokens',
  ],
  framework: { name: '@storybook/nextjs', options: {} },
  typescript: {
    check: false,
    reactDocgen: 'react-docgen-typescript',
  },
  staticDirs: ['../public'],
};

export default config;
Our stories follow a consistent pattern covering different component states and edge cases:
// src/components/molecules/ProductCard/ProductCard.stories.tsx
import type { Meta, StoryObj } from '@storybook/react';
import { ProductCard } from './ProductCard';

const meta: Meta<typeof ProductCard> = {
  title: 'Molecules/ProductCard',
  component: ProductCard,
  parameters: {
    layout: 'centered',
    docs: {
      description: {
        component: 'A product card component displaying product info and add-to-cart actions.',
      },
    },
  },
  argTypes: {
    onAddToCart: { action: 'addToCart' },
    product: { description: 'Product data object', control: { type: 'object' } },
  },
};

export default meta;
type Story = StoryObj<typeof meta>;

export const Default: Story = {
  args: {
    product: {
      id: '1',
      name: 'Sample Product',
      price: 29.99,
      image: '/images/sample-product.jpg',
      slug: 'sample-product',
    },
  },
};
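The @storybook/addon-interactions addon configured earlier also lets a story exercise behavior rather than just render states. Below is a hedged sketch of an interaction story that could live in the same file; the utilities shown come from @storybook/test (the Storybook 8 package — in Storybook 7 the equivalents lived in @storybook/jest and @storybook/testing-library), and the button role/name is assumed from the component's markup:

```typescript
// Additional story in ProductCard.stories.tsx — a sketch, assuming Storybook 8's
// @storybook/test package is installed alongside addon-interactions.
import { expect, fn, userEvent, within } from '@storybook/test';

export const AddsToCart: Story = {
  args: {
    product: {
      id: '1',
      name: 'Sample Product',
      price: 29.99,
      image: '/images/sample-product.jpg',
      slug: 'sample-product',
    },
    onAddToCart: fn(), // spy so the play function can assert the callback
  },
  play: async ({ args, canvasElement }) => {
    const canvas = within(canvasElement);
    // Click the add-to-cart button and verify the handler received the product id.
    await userEvent.click(canvas.getByRole('button', { name: /add to cart/i }));
    await expect(args.onAddToCart).toHaveBeenCalledWith('1');
  },
};
```

Both Chromatic and the Storybook test runner execute play functions, so the same story doubles as a visual snapshot and an interaction test.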
Visual Regression Testing with Chromatic
We use Chromatic for automated visual regression testing. Each pull request generates visual diffs and flags unintended UI changes:
# .github/workflows/visual-regression.yml
name: Visual Regression Tests

on:
  pull_request:
    branches: [main]

jobs:
  visual-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '18'
          cache: 'npm'
      - run: npm ci
      - run: npm run build-storybook
      - uses: chromaui/action@v1
        with:
          projectToken: ${{ secrets.CHROMATIC_PROJECT_TOKEN }}
          buildScriptName: build-storybook
          exitZeroOnChanges: true

Unit Testing with Jest & React Testing Library
We use Jest and React Testing Library to verify component behavior and user interactions:
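The tests below assume Jest is wired up for Next.js. A minimal configuration sketch using next/jest follows — the file contents and the `@/` path alias are assumptions, not taken from the original series:

```typescript
// jest.config.ts — assumed setup, not shown in the original series.
import nextJest from 'next/jest';

// next/jest loads next.config.js and .env files so tests run with the
// same settings as the app itself.
const createJestConfig = nextJest({ dir: './' });

export default createJestConfig({
  testEnvironment: 'jsdom', // requires the jest-environment-jsdom package on Jest 28+
  moduleNameMapper: { '^@/(.*)$': '<rootDir>/src/$1' }, // assumed path alias
});
```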
// src/components/molecules/ProductCard/ProductCard.test.tsx
import { render, screen, fireEvent } from '@testing-library/react';
import { ProductCard } from './ProductCard';

describe('ProductCard', () => {
  const mockProduct = {
    id: '1',
    name: 'Test Product',
    price: 29.99,
    image: '/test-image.jpg',
    slug: 'test-product',
  };

  it('renders product info', () => {
    render(<ProductCard product={mockProduct} />);
    expect(screen.getByText('Test Product')).toBeInTheDocument();
    expect(screen.getByText('$29.99')).toBeInTheDocument();
  });

  it('calls onAddToCart when clicked', () => {
    const mockOnAdd = jest.fn();
    render(<ProductCard product={mockProduct} onAddToCart={mockOnAdd} />);
    fireEvent.click(screen.getByRole('button', { name: /add to cart/i }));
    expect(mockOnAdd).toHaveBeenCalledWith('1');
  });
});

Integration Testing with MSW
We test component and API integration using Mock Service Worker (MSW):
// src/mocks/handlers.ts
// Note: this uses the MSW v1 API; MSW v2 replaces rest/graphql with http/HttpResponse.
import { graphql, rest } from 'msw';

export const handlers = [
  graphql.query('GetProduct', (req, res, ctx) =>
    res(ctx.data({ product: { id: '1', name: 'Mock Product', price: 29.99 } }))
  ),
  rest.get('/api/cart', (req, res, ctx) => res(ctx.json({ items: [], total: 0 }))),
];

E2E Testing with Playwright
Playwright Configuration
Playwright validates full user journeys from browsing to checkout:
// playwright.config.ts
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  testDir: './e2e',
  retries: process.env.CI ? 2 : 0,
  reporter: 'html',
  use: { baseURL: process.env.BASE_URL || 'http://localhost:3000', trace: 'on-first-retry' },
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
  ],
  webServer: { command: 'npm run dev', url: 'http://localhost:3000', reuseExistingServer: !process.env.CI },
});

Critical User Journey
We test the purchase flow end-to-end:
// e2e/product-purchase.spec.ts
import { test, expect } from '@playwright/test';

test('user can browse and purchase', async ({ page }) => {
  await page.goto('/');
  await page.fill('[data-testid="search-input"]', 'laptop');
  await page.click('[data-testid="search-button"]');
  // .first() avoids a strict-mode violation when the search returns multiple cards.
  await expect(page.locator('[data-testid="product-card"]').first()).toBeVisible();
  await page.locator('[data-testid="product-card"]').first().click();
  await expect(page.getByRole('button', { name: /add to cart/i })).toBeVisible();
  await page.click('[data-testid="add-to-cart-button"]');
  await page.click('[data-testid="cart-icon"]');
  await expect(page.locator('h1')).toContainText('Shopping Cart');
});

CI/CD Pipeline & Deployment
We automate testing, builds, and deployments with GitHub Actions:
# .github/workflows/deploy.yml
name: CI/CD Pipeline

on:
  push:
    branches: [main]

env:
  NODE_VERSION: '18'
  AZURE_WEBAPP_NAME: 'your-app-name'

jobs:
  test-build-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - run: npm ci
      - run: npm run test:ci
      - run: npm run build
      - uses: azure/webapps-deploy@v2
        with:
          app-name: ${{ env.AZURE_WEBAPP_NAME }}
          publish-profile: ${{ secrets.AZURE_PUBLISH_PROFILE }}
          package: ./

Blue-Green Deployments
We use blue-green deployments to swap production slots safely and minimize downtime.
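Before promoting a slot, the swap can be gated by a smoke test against the candidate deployment. The sketch below illustrates one way to do that; the `/api/health` endpoint, its `{ status: 'ok' }` payload, and the slot URL are assumptions for illustration, not part of the original pipeline:

```typescript
// scripts/smoke-test.ts — hypothetical pre-swap health gate.
// Assumes the candidate slot exposes GET /api/health returning { status: 'ok' }.

// Pure predicate so the gate logic is easy to unit test.
export function isHealthy(status: number, body: { status?: string }): boolean {
  return status === 200 && body.status === 'ok';
}

// Uses the global fetch available in Node 18+.
export async function smokeTest(baseUrl: string): Promise<void> {
  const res = await fetch(`${baseUrl}/api/health`);
  const body = (await res.json()) as { status?: string };
  if (!isHealthy(res.status, body)) {
    // A non-healthy candidate slot aborts the deployment before the swap runs.
    throw new Error(`Smoke test failed for ${baseUrl}: HTTP ${res.status}`);
  }
}
```

Running this against the staging slot and only swapping on success is what makes the rollback story cheap: the old slot stays live until the new one has proven itself.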
// scripts/deploy-blue-green.ts
import { execSync } from 'child_process';

const currentSlot = process.env.AZURE_SLOT || 'blue';
const newSlot = currentSlot === 'blue' ? 'green' : 'blue';

console.log(`Deploying to ${newSlot}`);
execSync(
  `az webapp deployment slot swap --name my-app --resource-group my-group --slot ${newSlot} --target-slot ${currentSlot}`,
  { stdio: 'inherit' } // surface the az CLI output in the deploy logs
);

Monitoring & Observability
Application Insights
We use Azure Application Insights for telemetry, tracing, and performance metrics:
// lib/monitoring/app-insights.ts
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

export const appInsights = new ApplicationInsights({
  config: {
    connectionString: process.env.NEXT_PUBLIC_APP_INSIGHTS_CONNECTION_STRING,
    enableAutoRouteTracking: true,
  },
});

appInsights.loadAppInsights();

Custom Events & Metrics
Beyond automatic telemetry, we track domain events such as product views and add-to-cart actions:
// lib/monitoring/telemetry.ts
import { appInsights } from './app-insights';

export const telemetry = {
  trackProductView: (id: string) => appInsights.trackEvent({ name: 'ProductView', properties: { id } }),
  trackAddToCart: (id: string) => appInsights.trackEvent({ name: 'AddToCart', properties: { id } }),
};

Core Web Vitals
We also report Core Web Vitals as custom metrics:
// lib/monitoring/performance.ts
// Note: this is the web-vitals v2 API; v3+ renames these to onCLS/onFID/onLCP.
import { getCLS, getFID, getLCP } from 'web-vitals';
import { appInsights } from './app-insights';

[getCLS, getFID, getLCP].forEach((fn) =>
  fn((metric) => appInsights.trackMetric({ name: `WebVital.${metric.name}`, average: metric.value }))
);

Conclusion
This three-part series has explored the full lifecycle of a modern Optimizely + Next.js eCommerce platform—from architecture and implementation to testing, deployment, and monitoring.
Key Takeaways
- Testing Strategy – Comprehensive pyramid: unit, integration, E2E, and visual testing
- CI/CD Pipeline – Automated testing and blue-green deployments with GitHub Actions + Azure
- Observability – Application Insights telemetry and Core Web Vitals tracking
- Resilience – Health checks and rollbacks ensure production stability
Modern architecture works because it’s built for change. With robust testing, automation, and observability, you can evolve quickly while maintaining confidence in every release.
The complete series:
- Part 1: Architecture Overview
- Part 2: Frontend Implementation
- Part 3: Testing, Deployment & Monitoring (this post)