Unit Testing in 2026: No Excuses Left
The tooling has caught up. AI writes test scaffolds, frameworks are fast, and there is no good reason to ship untested code anymore.

Three years ago, the excuses for not writing tests were at least plausible. Test frameworks were slow. Setting up mocks was tedious. Writing tests took longer than writing the code. The ROI was not obvious for a 3-person startup sprinting to product-market fit. I did not agree with those excuses, but I understood them.
In 2026, those excuses are gone. Vitest runs a full test suite in the time it used to take Jest to boot up. AI tools scaffold comprehensive test files from a function signature. TypeScript catches entire categories of bugs at compile time, narrowing what actually needs testing. Docker Compose spins up real databases for integration tests in seconds. The gap between "no tests" and "well-tested" has never been smaller.
The Modern Testing Landscape
The testing ecosystem has consolidated around a few excellent tools, and the fragmentation that used to make test setup a research project has largely disappeared.
Vitest vs Jest: The Decision Is Made
For any new project, Vitest is the default choice. Not because Jest is bad — Jest is a solid, battle-tested framework — but because Vitest is measurably better in every dimension that matters for modern development.
Speed: Vitest uses esbuild for transformation and runs tests in worker threads with native ESM support. In my projects, the same test suite runs 3-5x faster under Vitest compared to Jest. A suite that took 45 seconds in Jest completes in 12 seconds in Vitest.
Configuration: Vitest reuses your Vite config. If you are already using Vite for your build (and in 2026, most projects are), there is zero additional configuration. Compare that to Jest's moduleNameMapper, transform, transformIgnorePatterns, and the inevitable troubleshooting of why some ESM package does not work.
Compatibility: Vitest implements a Jest-compatible API. describe, it, expect, beforeEach, afterEach, vi.fn(), vi.mock() — all work as expected. Migrating from Jest to Vitest is mostly a find-and-replace of jest.fn() to vi.fn().
// vitest.config.ts — this is often all you need
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    globals: true,
    environment: 'node',
    coverage: {
      provider: 'v8',
      reporter: ['text', 'json', 'html'],
      exclude: ['node_modules/', 'dist/', '**/*.d.ts', '**/*.config.*'],
    },
  },
});
When to stick with Jest: If you have a large existing Jest test suite (500+ tests) and no immediate pain points, migration is not urgent. Jest continues to receive updates and works fine. But for new projects or projects with fewer than 100 tests, Vitest is the obvious choice.
The Testing Pyramid in Practice
The traditional testing pyramid (many unit tests, fewer integration tests, few E2E tests) remains correct in principle, but the boundaries have shifted.
Unit tests verify individual functions, utility methods, and pure business logic. They should be fast (under 10ms each), have no external dependencies, and test behavior rather than implementation.
Integration tests verify that components work together — API routes with database queries, service methods with external API calls, React components with their state management. These are slower (100-500ms each) but catch bugs that unit tests miss.
E2E tests verify complete user workflows through the actual application. These are the slowest (5-30 seconds each) and most fragile, but they catch bugs that nothing else does. Tools like Playwright have made E2E tests significantly more reliable than the Selenium era.
The ratio I target: 70% unit, 20% integration, 10% E2E. The exact numbers matter less than having representation at each level.
What to Test and What to Skip
Not all code needs the same testing rigor. Knowing where to invest testing effort is as important as knowing how to write tests.
Always Test
Business logic and domain rules. If the code implements a business rule — pricing calculation, permission check, data validation, state machine transition — it needs tests. These are the rules that, if wrong, cost money or create security vulnerabilities.
// pricing.ts
export interface OrderItem {
  id: string;
  name: string;
  price: number;
  quantity: number;
}

export type Discount =
  | { type: 'percentage'; value: number }
  | { type: 'fixed'; value: number };

export function calculateOrderTotal(items: OrderItem[], discount?: Discount): number {
  const subtotal = items.reduce((sum, item) => sum + item.price * item.quantity, 0);
  if (!discount) return subtotal;
  if (discount.type === 'percentage') {
    return subtotal * (1 - discount.value / 100);
  }
  if (discount.type === 'fixed') {
    return Math.max(0, subtotal - discount.value);
  }
  return subtotal;
}
// pricing.test.ts
import { describe, it, expect } from 'vitest';
import { calculateOrderTotal, type OrderItem } from './pricing';

describe('calculateOrderTotal', () => {
  const items: OrderItem[] = [
    { id: '1', name: 'Burger', price: 12.99, quantity: 2 },
    { id: '2', name: 'Fries', price: 4.99, quantity: 1 },
  ];

  it('calculates subtotal without discount', () => {
    // toBeCloseTo guards against floating-point drift in money math
    expect(calculateOrderTotal(items)).toBeCloseTo(30.97);
  });

  it('applies percentage discount', () => {
    const discount = { type: 'percentage' as const, value: 10 };
    expect(calculateOrderTotal(items, discount)).toBeCloseTo(27.873);
  });

  it('applies fixed discount', () => {
    const discount = { type: 'fixed' as const, value: 5 };
    expect(calculateOrderTotal(items, discount)).toBeCloseTo(25.97);
  });

  it('does not go below zero with fixed discount', () => {
    const discount = { type: 'fixed' as const, value: 50 };
    expect(calculateOrderTotal(items, discount)).toBe(0);
  });

  it('handles empty items array', () => {
    expect(calculateOrderTotal([])).toBe(0);
  });
});
Data transformation functions. Any function that transforms data from one shape to another — API response mappers, form data serializers, CSV parsers — needs tests with representative inputs and edge cases.
Error handling paths. What happens when the API returns a 500? When the input is null? When the file does not exist? Error paths are where bugs hide because developers test the happy path manually and assume the error path works.
Utility functions. String formatters, date helpers, array utilities, validation functions. These are used everywhere, and a bug in a utility function multiplies across the codebase.
Skip or Test Lightly
Direct framework wrappers. If your function is a thin wrapper around a well-tested framework method, testing it tests the framework, not your code. A React component that renders <h1>{title}</h1> does not need a test verifying that the h1 renders.
Configuration files. Static configuration, constants, type definitions — these do not need tests. TypeScript already validates them at compile time.
Third-party library integration glue. Code that calls stripe.charges.create() with parameters from your data model does not need a unit test that mocks Stripe and verifies you called it. It needs an integration test that verifies the end-to-end charge flow.
Mocking Strategies
Mocking is where tests go from "verifying behavior" to "testing implementation details." The goal is to mock as little as possible while keeping tests fast and deterministic.
The Dependency Injection Approach
Instead of mocking modules, pass dependencies as parameters. This makes testing natural and avoids the vi.mock() magic that couples tests to module structure.
// user-service.ts
// Minimal shapes for illustration — in a real codebase these come from your data layer.
export interface CreateUserInput { name: string; email: string }
export interface User extends CreateUserInput { id: string }
export interface Database {
  users: { create(args: { data: CreateUserInput }): Promise<User> };
}
export interface EmailClient {
  send(msg: { to: string; template: string; data: Record<string, unknown> }): Promise<void>;
}

export function createUserService(db: Database, emailClient: EmailClient) {
  return {
    async createUser(data: CreateUserInput): Promise<User> {
      const user = await db.users.create({ data });
      await emailClient.send({
        to: user.email,
        template: 'welcome',
        data: { name: user.name },
      });
      return user;
    },
  };
}
// user-service.test.ts
import { describe, it, expect, vi } from 'vitest';
import { createUserService } from './user-service';

describe('createUserService', () => {
  it('creates user and sends welcome email', async () => {
    const mockUser = { id: '1', name: 'Jane', email: 'jane@example.com' };
    const db = {
      users: { create: vi.fn().mockResolvedValue(mockUser) },
    };
    const emailClient = {
      send: vi.fn().mockResolvedValue(undefined),
    };

    const service = createUserService(db as any, emailClient as any);
    const result = await service.createUser({ name: 'Jane', email: 'jane@example.com' });

    expect(result).toEqual(mockUser);
    expect(emailClient.send).toHaveBeenCalledWith({
      to: 'jane@example.com',
      template: 'welcome',
      data: { name: 'Jane' },
    });
  });
});
No vi.mock() calls, no module path strings, no order-dependent setup. The test is explicit about what is real and what is fake.
When to Use vi.mock()
Module-level mocking is appropriate when you cannot control the dependency injection — typically when testing React components that import modules directly or when testing code that uses environment-specific globals.
// When you genuinely need module mocking
vi.mock('./api-client', () => ({
  fetchUser: vi.fn().mockResolvedValue({ id: '1', name: 'Test User' }),
}));
MSW for API Mocking
For tests that make HTTP requests, Mock Service Worker (MSW) intercepts requests at the network level — via a Service Worker in the browser, and via low-level request interception in Node. This is superior to mocking fetch or axios because your actual HTTP client code runs unmodified.
import { beforeAll, afterEach, afterAll } from 'vitest';
import { setupServer } from 'msw/node';
import { http, HttpResponse } from 'msw';

const server = setupServer(
  http.get('https://api.example.com/users/:id', ({ params }) => {
    return HttpResponse.json({
      id: params.id,
      name: 'Test User',
      email: 'test@example.com',
    });
  }),
  http.post('https://api.example.com/orders', async ({ request }) => {
    const body = (await request.json()) as Record<string, unknown>;
    return HttpResponse.json(
      { id: 'order-1', ...body, status: 'created' },
      { status: 201 }
    );
  })
);

beforeAll(() => server.listen());
afterEach(() => server.resetHandlers());
afterAll(() => server.close());
AI-Assisted Test Generation
AI tools have genuinely changed the economics of test writing. Generating the initial test scaffold — the boilerplate, the basic happy-path cases, the standard edge cases — is exactly the kind of repetitive, pattern-based work that LLMs handle well.
What AI Does Well
- Generating test files from function signatures and type definitions
- Suggesting edge cases you might not think of (empty arrays, null values, boundary numbers)
- Writing repetitive setup/teardown boilerplate
- Creating mock objects that match interface shapes
- Generating parameterized test cases for functions with many input combinations
What AI Does Poorly
- Understanding business context ("this should fail because users cannot order after midnight" is domain knowledge)
- Testing complex state interactions across multiple function calls
- Writing meaningful integration tests that exercise real system boundaries
- Deciding what to test and what not to test
- Writing tests that verify behavior rather than implementation
The Practical Workflow
My workflow with AI-assisted testing:
- Write the function or module.
- Ask the AI to generate a test file with basic cases.
- Review and fix the generated tests — AI-generated tests often test implementation details rather than behavior.
- Add domain-specific cases the AI missed.
- Run the tests and verify they actually catch bugs by temporarily breaking the code.
The AI does step 2 (which used to be the tedious part) and I focus on steps 3-5 (which require judgment). This reduces test writing time by roughly 50-60% for standard utility and service code.
Testing React Components
Component testing has evolved significantly. The shift from Enzyme's shallow rendering to React Testing Library's user-centric testing philosophy changed how I think about component tests.
What to Test in Components
- Does the component render correctly with different props?
- Do user interactions (click, type, select) trigger the expected behavior?
- Does the component handle loading, error, and empty states?
- Does conditional rendering work correctly?
What Not to Test in Components
- Internal state values (test what the user sees, not what React tracks)
- Implementation details (which function was called, how many re-renders happened)
- Styling (use visual regression testing for that)
// OrderSummary.test.tsx
import { describe, it, expect, vi } from 'vitest';
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { OrderSummary } from './OrderSummary';

describe('OrderSummary', () => {
  const defaultProps = {
    items: [
      { id: '1', name: 'Burger', price: 12.99, quantity: 2 },
      { id: '2', name: 'Fries', price: 4.99, quantity: 1 },
    ],
    onCheckout: vi.fn(),
  };

  it('displays order items with prices', () => {
    render(<OrderSummary {...defaultProps} />);
    expect(screen.getByText('Burger')).toBeInTheDocument();
    expect(screen.getByText('$25.98')).toBeInTheDocument(); // 12.99 * 2
    expect(screen.getByText('Fries')).toBeInTheDocument();
    expect(screen.getByText('$4.99')).toBeInTheDocument();
  });

  it('displays total', () => {
    render(<OrderSummary {...defaultProps} />);
    expect(screen.getByText('Total: $30.97')).toBeInTheDocument();
  });

  it('calls onCheckout when checkout button is clicked', async () => {
    const user = userEvent.setup();
    render(<OrderSummary {...defaultProps} />);
    await user.click(screen.getByRole('button', { name: /checkout/i }));
    expect(defaultProps.onCheckout).toHaveBeenCalledOnce();
  });

  it('shows empty state when no items', () => {
    render(<OrderSummary {...defaultProps} items={[]} />);
    expect(screen.getByText(/your cart is empty/i)).toBeInTheDocument();
    expect(screen.queryByRole('button', { name: /checkout/i })).not.toBeInTheDocument();
  });

  it('disables checkout button during loading', () => {
    render(<OrderSummary {...defaultProps} isLoading={true} />);
    expect(screen.getByRole('button', { name: /checkout/i })).toBeDisabled();
  });
});
Notice: no getByTestId unless necessary, no checking internal state, no verifying CSS classes. The tests read like a description of what a user sees and does.
Testing API Routes
API route tests are integration tests that verify your HTTP handler works correctly with its middleware, validation, and response formatting.
// Using supertest with an Express app
import { describe, it, expect, beforeAll, afterAll } from 'vitest';
import request from 'supertest';
import { app } from '../app';
import { db } from '../database';

describe('POST /api/orders', () => {
  let authToken: string;

  beforeAll(async () => {
    await db.migrate.latest();
    await db.seed.run();
    // Get auth token for test user
    const loginResponse = await request(app)
      .post('/api/auth/login')
      .send({ email: 'test@example.com', password: 'testpassword' });
    authToken = loginResponse.body.token;
  });

  afterAll(async () => {
    await db.destroy();
  });

  it('creates an order with valid data', async () => {
    const response = await request(app)
      .post('/api/orders')
      .set('Authorization', `Bearer ${authToken}`)
      .send({
        items: [{ productId: 'prod-1', quantity: 2 }],
        deliveryAddress: '123 Main St',
      });

    expect(response.status).toBe(201);
    expect(response.body).toMatchObject({
      id: expect.any(String),
      status: 'pending',
      items: expect.arrayContaining([
        expect.objectContaining({ productId: 'prod-1', quantity: 2 }),
      ]),
    });
  });

  it('returns 401 without auth token', async () => {
    const response = await request(app)
      .post('/api/orders')
      .send({ items: [{ productId: 'prod-1', quantity: 2 }] });
    expect(response.status).toBe(401);
  });

  it('returns 400 with invalid data', async () => {
    const response = await request(app)
      .post('/api/orders')
      .set('Authorization', `Bearer ${authToken}`)
      .send({ items: [] });
    expect(response.status).toBe(400);
    expect(response.body.errors).toBeDefined();
  });

  it('returns 400 when product does not exist', async () => {
    const response = await request(app)
      .post('/api/orders')
      .set('Authorization', `Bearer ${authToken}`)
      .send({
        items: [{ productId: 'nonexistent', quantity: 1 }],
        deliveryAddress: '123 Main St',
      });
    expect(response.status).toBe(400);
    expect(response.body.message).toContain('not found');
  });
});
Testing Database Operations
Testing database operations requires a real database. Mocking SQL queries tests the mock, not the query. Use a test database that gets recreated for each test run.
// database.test.ts
import { describe, it, expect, beforeEach } from 'vitest';
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient({
  datasources: { db: { url: process.env.TEST_DATABASE_URL } },
});

beforeEach(async () => {
  // Clean tables in dependency order
  await prisma.orderItem.deleteMany();
  await prisma.order.deleteMany();
  await prisma.user.deleteMany();
});

describe('Order repository', () => {
  it('creates order with items and calculates total', async () => {
    const user = await prisma.user.create({
      data: { email: 'test@example.com', name: 'Test User' },
    });

    const order = await prisma.order.create({
      data: {
        userId: user.id,
        items: {
          create: [
            { productName: 'Burger', price: 12.99, quantity: 2 },
            { productName: 'Fries', price: 4.99, quantity: 1 },
          ],
        },
        total: 30.97,
      },
      include: { items: true },
    });

    expect(order.items).toHaveLength(2);
    expect(order.total).toBe(30.97);
    expect(order.userId).toBe(user.id);
  });

  it('enforces unique email constraint', async () => {
    await prisma.user.create({
      data: { email: 'duplicate@example.com', name: 'User 1' },
    });

    await expect(
      prisma.user.create({
        data: { email: 'duplicate@example.com', name: 'User 2' },
      })
    ).rejects.toThrow();
  });
});
Test Database Setup in CI
# In GitHub Actions (under jobs.<job_id>)
services:
  postgres:
    image: postgres:16
    env:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: test_db
    ports:
      - 5432:5432
With Prisma, run npx prisma migrate deploy before tests to apply schema. With raw SQL, run migration scripts. The test database starts fresh for every CI run.
Coverage Metrics That Matter
Code coverage is a useful signal, but it is a terrible target.
The Coverage Trap
Optimizing for 100% coverage leads to tests that exist to hit uncovered lines rather than to verify behavior. I have seen tests like this:
// This test exists purely for coverage
it('should have a default export', () => {
  expect(module).toBeDefined();
});
This test verifies nothing meaningful. It adds to the coverage number while adding zero confidence.
Meaningful Coverage Targets
Instead of a blanket coverage threshold, I use different targets for different code categories:
| Category | Target | Rationale |
|---|---|---|
| Business logic / Domain | 90%+ | Bugs here cost money |
| API routes / Controllers | 80%+ | Integration boundaries |
| Utility functions | 95%+ | Widely used, high impact |
| UI components | 70%+ | Visual bugs caught by E2E |
| Configuration / Setup | No target | Static, rarely breaks |
Configure coverage thresholds per directory:
// vitest.config.ts
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      thresholds: {
        'src/domain/**': { statements: 90, branches: 85 },
        'src/api/**': { statements: 80, branches: 75 },
        'src/utils/**': { statements: 95, branches: 90 },
      },
    },
  },
});
The Mutation Testing Alternative
If coverage numbers feel hollow, mutation testing provides a more honest assessment. Tools like Stryker introduce small changes (mutations) to your code and verify that your tests catch them. A mutation that survives (tests still pass) indicates a gap in your test suite.
npx stryker run
Mutation testing is slow — it reruns tests for every mutant — but revelatory. I run it monthly rather than on every commit.
Test-Driven Development in Practice
I practice TDD selectively, not dogmatically. It works exceptionally well for some code and adds friction to other code.
Where TDD Shines
Pure functions with clear specifications. If you know the inputs and expected outputs before writing the code, writing the test first is natural and productive.
Bug fixes. Before fixing a bug, write a test that reproduces it. Then fix the code until the test passes. This guarantees the bug stays fixed.
Refactoring. Write tests for existing behavior before refactoring. The tests act as a safety net, verifying that the refactored code produces identical results.
Where TDD Adds Friction
Exploratory code. When you are figuring out how something should work — experimenting with an API, prototyping a UI, exploring a data structure — writing tests first slows you down. Write the code, stabilize the interface, then add tests.
UI components. Testing Library tests for React components are best written after the component exists, because the component's rendered output (which the tests assert on) is not known until you build it.
The Red-Green-Refactor Cycle
When I do TDD, I follow the classic cycle strictly:
- Red: Write a test that fails (because the code does not exist yet).
- Green: Write the minimum code to make the test pass. Do not optimize.
- Refactor: Clean up the code while keeping all tests green.
The discipline of step 2 — writing the minimum code — is the most important and most frequently violated. The temptation to write "the real implementation" immediately defeats the purpose of TDD, which is to let tests drive the design incrementally.
The Investment Pays Compound Interest
Every test you write has a shelf life measured in years. A test written today will catch regressions in next week's refactor, next month's feature addition, and next year's framework upgrade. The 10 minutes you spend writing a test saves 2 hours of debugging six months from now, but you never see that savings explicitly — you just never have the bug.
The hardest part of testing is not the tooling or the techniques. It is the discipline to write the test now, when the code is fresh in your mind, rather than adding it to the backlog where it quietly dies. The tooling excuse is gone. The speed excuse is gone. The only excuse left is prioritization, and shipping tested code should not be negotiable.
Danil Ulmashev
Full Stack Developer