Engineering · April 24, 2026 · 11 min read

Why we rebuilt NotPanel on Next.js + PostgreSQL (and what we learned)

Twelve years operating the wholesale infrastructure underneath the SMM industry. We rebuilt our public platform from scratch in 2025–2026. Here's what we replaced, what surprised us, and the choices we'd make again.


For most of the last decade, the SMM industry has run on a single PHP codebase that gets re-skinned and re-sold by hundreds of operators. As the wholesale source underneath it, we've watched orders flow through that codebase for years. In 2025–2026 we rebuilt our public-facing platform from scratch.

This is the engineering story. What we replaced. What surprised us. The choices we'd make again. Some of them might surprise you — they surprised us.

What we replaced

The legacy platform was the same script most public panels run. Its problems aren't unique to us. They're industry-wide:

  • Float-based money. Account balances stored as MySQL FLOAT. Cumulative drift on high-volume accounts ran into pennies per month. Sounds harmless until you trace it across a multi-year history (a short sketch of the drift follows this list).
  • No row-level locks on debits. Two parallel orders against a $10 balance with two $6 charges could both succeed. The fix exists in the codebase but is gated behind config flags most deployments never enable.
  • Cron-driven order processing. A PHP script ran every minute, looked for pending orders, forwarded them. The script could miss an order if it crashed mid-run. Or process the same order twice if scheduling overlapped.
  • No webhook signing. Outbound webhooks were plain POSTs. Anyone who guessed a customer's webhook URL could fabricate completion events and trigger their downstream logic.
  • Plain-text provider keys. Provider API credentials sat as plain VARCHARs in MySQL. A single database dump exposed every upstream relationship.
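
To make the first item concrete, here's a minimal sketch in TypeScript. JavaScript doubles stand in for MySQL's binary FLOAT here; the drift mechanism is the same, and the amounts are invented for illustration.

// Debit $0.10 a thousand times from a $100.00 balance.
let floatBalance = 100.0;
for (let i = 0; i < 1000; i++) {
  floatBalance -= 0.1; // 0.1 has no exact binary representation, so each step rounds
}
console.log(floatBalance); // prints a small non-zero residue instead of exactly 0

// The same debits tracked in integer cents (or NUMERIC in the database) stay exact.
let cents = 100_00;
for (let i = 0; i < 1000; i++) {
  cents -= 10;
}
console.log(cents); // 0
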
Reality check

We could have patched these one at a time. We chose to rebuild because too many of the problems were structural — fixing them required changes to data types and concurrency primitives the legacy codebase wasn't designed around. Sometimes the cheapest fix is a new house, not a new roof.

The stack we picked

The constraints we set ourselves at the start:

  • Type safety end-to-end. No string-typed money, no untyped JSON.
  • Database constraints, not application validation, as the source of money safety.
  • Workers separable from the web tier. Restarts and scaling decoupled.
  • No client-side rendering for content pages. Everything indexable, fast on slow connections.
  • One codebase per surface. No separate API server, no separate admin app.

What we ended up with:

  • Next.js 16 with App Router. Server components by default, client components only where interactivity demands.
  • tRPC for the dashboard's data layer. Type-safe end-to-end without OpenAPI scaffolding.
  • PostgreSQL 16 with Drizzle ORM. NUMERIC(20,8) for money, CHECK constraints on every balance column, FOREIGN KEYs on every relation, no nullable columns where presence matters (a minimal schema sketch follows this list).
  • Redis for sessions, rate limiting, the SSE pub/sub backbone, a few hot-path caches.
  • BullMQ for everything async. Order processing, webhook delivery, status syncing, scheduled jobs.
  • Better Auth for sessions, OAuth, 2FA. Less mature ecosystem than the alternatives, but the type discipline matched the rest of the stack.
  • Caddy + Docker Compose on a single VPS. Boring. Works. We considered Kubernetes for about an hour before remembering we didn't need it.
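
To make the database bullet concrete, here's a minimal schema sketch. The table and column names are illustrative, and the DDL runs through node-postgres rather than our actual Drizzle migrations; the point is that the money invariant lives in PostgreSQL, not in application code.

import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Balances as NUMERIC(20,8); non-negativity enforced by the database itself.
export async function createUsersTable(): Promise<void> {
  await pool.query(`
    CREATE TABLE IF NOT EXISTS users (
      id      UUID PRIMARY KEY DEFAULT gen_random_uuid(),
      email   TEXT NOT NULL UNIQUE,
      balance NUMERIC(20,8) NOT NULL DEFAULT 0,
      CONSTRAINT balance_non_negative CHECK (balance >= 0)
    )
  `);
}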

Money safety — the central rebuild lesson

The thing we got most wrong on the legacy platform — and most right on the rebuild — is money safety under concurrency. The contract we landed on:

  1. SELECT … FOR UPDATE on the user row inside a transaction.
  2. Read balance from the locked row.
  3. Compute charge server-side. Never trust a client-supplied price.
  4. UPDATE … SET balance = balance - $charge WHERE id = $1 AND balance >= $charge RETURNING balance. If 0 rows returned, abort.
  5. Insert a transactions ledger row.
  6. Commit.
  7. Only then forward the order to the upstream provider.

Steps 4 and 7 are the non-obvious ones. The WHERE balance >= $charge on the UPDATE is belt-and-suspenders with the FOR UPDATE — even if the lock leaks somehow, the conditional UPDATE makes double-debit impossible. And forwarding to upstream only after commit means a refund/cancel racing in mid-flight can't end up with us paying upstream while the user got their money back.
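
Here's a minimal sketch of that contract in TypeScript with node-postgres. Table, column, and function names are illustrative; the real code sits behind Drizzle and classifies errors more carefully. Amounts travel as strings so they never pass through JavaScript floats.

import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Debits `charge` (already computed server-side from the service price) from
// `userId`. Returns the new balance, or null if funds were insufficient.
// The upstream provider is called only after this function has committed.
export async function debit(userId: string, charge: string): Promise<string | null> {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");

    // Steps 1–2: lock the user row and read the balance from the locked row.
    const locked = await client.query(
      "SELECT balance FROM users WHERE id = $1 FOR UPDATE",
      [userId]
    );
    if (locked.rowCount === 0) {
      await client.query("ROLLBACK");
      return null; // unknown user
    }

    // Step 4: conditional debit. Even if the lock somehow leaked, this cannot
    // drive the balance negative or double-debit.
    const debited = await client.query(
      `UPDATE users
          SET balance = balance - $2
        WHERE id = $1 AND balance >= $2
        RETURNING balance`,
      [userId, charge]
    );
    if (debited.rowCount === 0) {
      await client.query("ROLLBACK");
      return null; // insufficient balance: abort, nothing was charged
    }

    // Step 5: append-only ledger row.
    await client.query(
      "INSERT INTO transactions (user_id, amount, kind) VALUES ($1, $2, 'order_charge')",
      [userId, charge]
    );

    // Step 6: commit. Step 7 (forwarding upstream) happens only after this returns.
    await client.query("COMMIT");
    return debited.rows[0].balance as string;
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    client.release();
  }
}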

It's the kind of pattern that's obvious once you've been burned by its absence. We had been.

Atomic claim-before-act for workers

Every worker that calls an external API with side effects — placing orders, sending payments, delivering webhooks — uses the same claim pattern:

UPDATE orders
   SET status = 'processing'
 WHERE id = $1
   AND status IN ('pending', 'retrying')
RETURNING *;

If 0 rows returned, the order was already claimed by another worker or cancelled by the user. We exit without calling upstream. This eliminates the worst race in the legacy platform: a cancel coming in between "read pending order" and "call upstream API" used to result in both a refund AND a fulfilled order. Pure money loss.
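
Wrapped in worker code, the whole pattern is only a few lines. A sketch, with forwardToProvider standing in for the real upstream call:

import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Stand-in for the real provider API call.
declare function forwardToProvider(order: Record<string, unknown>): Promise<void>;

export async function processOrder(orderId: string): Promise<void> {
  // Atomic claim: flip the status before any external side effect.
  const claimed = await pool.query(
    `UPDATE orders
        SET status = 'processing'
      WHERE id = $1
        AND status IN ('pending', 'retrying')
      RETURNING *`,
    [orderId]
  );

  // Zero rows: another worker claimed it, or the user cancelled it.
  // Exit without touching the upstream API.
  if (claimed.rowCount === 0) return;

  await forwardToProvider(claimed.rows[0]);
}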

Idempotency as a contract

Order placement requires a caller-supplied request_id. We don't generate it server-side. Without a real idempotency key, you can't safely retry — every retry creates a new order. By making callers provide one, we move the responsibility (and the value) to the right side of the boundary.
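
The enforcement side is a database concern as much as an API one: a UNIQUE index on (user_id, request_id) turns a retry into a no-op. A sketch with hypothetical names; the real handler also validates the service, quantity, and price:

import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Assumes: CREATE UNIQUE INDEX orders_user_request ON orders (user_id, request_id);
export async function placeOrder(userId: string, requestId: string, serviceId: number, quantity: number) {
  const inserted = await pool.query(
    `INSERT INTO orders (user_id, request_id, service_id, quantity, status)
     VALUES ($1, $2, $3, $4, 'pending')
     ON CONFLICT (user_id, request_id) DO NOTHING
     RETURNING id`,
    [userId, requestId, serviceId, quantity]
  );

  if (inserted.rowCount === 0) {
    // A retry of a request we already accepted: hand back the original order.
    const existing = await pool.query(
      "SELECT id FROM orders WHERE user_id = $1 AND request_id = $2",
      [userId, requestId]
    );
    return { orderId: existing.rows[0].id as string, duplicate: true };
  }

  return { orderId: inserted.rows[0].id as string, duplicate: false };
}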

We considered server-generated keys for backward compat with legacy clients. Rejected. Server-generation hides the contract — the moment a server-generated key appears, callers stop thinking about idempotency, and we end up back at duplicate orders. Better to break legacy callers than silently undermine the contract.

Server-generated idempotency keys aren't idempotency. They're a lie about idempotency.
Reader poll

Of the operations engineers we surveyed, what's the single most underrated discipline in money-handling code?

  • Atomic claim before external calls: 36%
  • DB-level CHECK constraints on balances: 24%
  • Caller-supplied idempotency keys: 21%
  • Append-only transaction ledger: 19%

Internal poll, 47 engineers across operations, support, and finance, March 2026.

What we'd do differently

  • Started with PGlite for tests sooner. We spent weeks running integration tests against Dockerised PostgreSQL before trying PGlite. PGlite gives per-test isolation in milliseconds. Should have used it from week one (a sketch of the setup follows this list).
  • Picked Drizzle over Prisma earlier. We tried Prisma first. The reflective code generation and schema drift between SQL and TypeScript wasted enough time that switching paid for itself. Drizzle's "the SQL is the source of truth" philosophy fits money code better.
  • Avoided the temptation to use a queue-as-a-service. BullMQ on our own Redis was cheaper and simpler than any managed queue we evaluated. The "infrastructure debt" people warn about for self-hosted queues didn't materialise.
  • Sentry should have been wired on day one. We shipped without it for a few weeks. Every silent failure in that window cost 2× the diagnostic time it would have with stack traces in hand.
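
For the test point above, the per-test setup we converged on looks roughly like this (a sketch; it assumes a schema.sql shipped with the test suite):

import { PGlite } from "@electric-sql/pglite";
import { readFileSync } from "node:fs";

// Every test gets its own in-memory PostgreSQL: no containers, no shared
// state between tests, and setup measured in milliseconds.
export async function freshTestDb(): Promise<PGlite> {
  const db = new PGlite(); // in-memory by default
  await db.exec(readFileSync("schema.sql", "utf8"));
  return db;
}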

What surprised us

Biggest surprise: how much of the value came from things with nothing to do with the user-facing product. Observability. Money-safety invariants enforced at the database layer. Atomic claim semantics. Signed webhooks. None of these show up in a feature list. All of them are why operations don't wake us up at 3am.

Second surprise: how much faster the rebuild went than expected because of type safety. End-to-end types between database, backend, and frontend caught, in seconds at compile time, the kind of bug that takes hours to find in dynamic code: a wrong field name, a missing currency unit, a subtle shape mismatch.

The rebuild, in numbers:

  • 30 DB tables, up from ~20 in legacy
  • NUMERIC(20,8) money type, was FLOAT
  • 100% of webhooks signed, was 0%
  • 1 VPS production footprint; boring is fine

Open questions we're still figuring out

The rebuild isn't done. Things we're still iterating on:

  • The right boundary between dashboard tRPC and the public REST API. They share a lot of underlying logic but have different rate-limit and auth contracts.
  • Multi-region deployment. Single-VPS Caddy + Docker is fine for now but won't scale past a single geography. We'll add a second region when latency from a specific market starts hurting.
  • Internationalisation strategy. Currently cookie-based; SEO benefits from URL-prefixed locales but introduces routing complexity. We haven't fully resolved the trade-off. (Update: as of May 2026 we ship both — locale-prefixed public URLs, cookie-based dashboard.)

If you're an engineer evaluating SMM panels — or anyone curious about how a niche category gets modernised — that's the story so far. The platform is open for use at notpanel.com. API documentation lives at /developers. Architecture notes continue to land on this blog as we ship them.

Continue reading

  • Bulk SMM orders: how agencies process 100,000+ orders per day (Reseller business)
  • Choosing the right SMM panel: 12 signals of a real source vs a reseller (Tactics)
  • Webhook signing for SMM panels: HMAC-SHA256 in production (Engineering)